It would not be just an "if X < N". Those decisions are going to depend on a lot of variables besides income, such as credit history, assets, employment history, debts, and more.

Someone with a great credit history, lots of assets, a long-term job in a stable position, and low debt might be approved with a lower income than someone with a poor credit history whose income comes from a job in a volatile field.

There might be some absolute requirements, such as the person having a certain minimum income independent of all those other factors, having a certain minimum credit score, and so on. If the application is rejected because it doesn't meet one of those, then sure, you can just do a simple check and report that.

But most applications will be above the absolute minimums in all parameters, and the rejection happens because some more complicated function of all the criteria didn't meet the requirements.

But you can't just tell the person "We put all your numbers into this black box and it said no." You have to give them specific reasons their application was rejected.
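
To make that concrete, here's a minimal sketch of that two-tier structure. Every threshold and weight here is invented for illustration, not any real underwriting rule:

    # Illustrative only: every threshold and weight is made up.
    MIN_INCOME = 30_000
    MIN_CREDIT_SCORE = 580

    def decide(app):
        # Absolute floors: easy to check and easy to explain.
        if app["income"] < MIN_INCOME:
            return ("reject", "income below required minimum")
        if app["credit_score"] < MIN_CREDIT_SCORE:
            return ("reject", "credit score below required minimum")
        # Past the floors, the decision is a function of everything
        # at once, so no single field explains a rejection by itself.
        score = (0.4 * app["credit_score"] / 850
                 + 0.4 * min(app["income"] / 100_000, 1.0)
                 - 0.2 * app["debt"] / max(app["income"], 1))
        if score >= 0.5:
            return ("approve", None)
        return ("reject", "combined criteria below cutoff")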




Doesn't all this contradict what I initially replied to?


I don't see any contradiction.

Say a lender has used machine learning to train some sort of black box that takes in loan applications and responds with an approve/reject decision. If they reject an application using that, the Equal Credit Opportunity Act in the US requires that they tell the applicant a specific reason for the rejection. They can't just say "our machine learning model said no".

If they were not using any kind of machine learning system, they probably would have made the decision according to some series of rules, like "modified income must be X times the monthly payment on the loan", where modified income is the person's monthly income with adjustments for various things. Adjustments might be multipliers based on credit score, debt, and other things.

With that kind of system they would be able to tell you specifically why you were rejected. Say you need a modified income of $75k and you are a little short. They could look at their rules and figure out that you could reach $75k if you raised your income by a specific amount, or lowered your debt by a specific amount, or by some combination of the two.
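
As a sketch of how that works, assuming a hypothetical modified-income rule (the multipliers and the $75k target are made up, not any real underwriting formula):

    # Hypothetical rule: all numbers are invented for illustration.
    def credit_factor(score):
        # Better credit counts income slightly more.
        return 1.1 if score >= 740 else 1.0

    def modified_income(income, score, monthly_debt):
        return income * credit_factor(score) - 12 * monthly_debt

    def explain(income, score, monthly_debt, target=75_000):
        mod = modified_income(income, score, monthly_debt)
        if mod >= target:
            return "approved"
        short = target - mod
        # Because the rule is explicit, the gap converts directly
        # into specific, actionable reasons.
        return (f"rejected: modified income ${mod:,.0f} is ${short:,.0f} "
                f"below ${target:,}; raise annual income by "
                f"${short / credit_factor(score):,.0f} or reduce monthly "
                f"debt by ${short / 12:,.0f}")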

That kind of feedback is useful to the applicant. It tells them specific things they can do to improve their chances.

With a machine learning black box, the company doesn't know the rules the machine has learned. Hence my suggestion of asking the black box what-if scenarios to figure out specific things the applicant can change to get approved.
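
A rough sketch of that what-if probing, where approve(application) stands in for the trained black box (the fields and step sizes are assumptions):

    # approve() is the opaque model; this harness just perturbs one
    # mutable field at a time until the decision flips.
    def find_whatifs(approve, app, steps=20):
        suggestions = []
        for field, delta in [("income", 1_000), ("debt", -500)]:
            for i in range(1, steps + 1):
                trial = dict(app)
                trial[field] = max(0, app[field] + i * delta)
                if approve(trial):
                    suggestions.append(
                        f"change {field} by {i * delta:+,} to get approved")
                    break
        return suggestions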


>That kind of feedback is useful to the applicant. It tells them specific things they can do to improve their chances.

In that sense it's very practical, but it kicks the can down the road. Maybe the model has a hidden parameter representing the risk of the applicant being fired, which raises the threshold by 5% if the salary is a round number. Or maybe it is more likely to reject everyone with an income between $73k and $75k because it learned that this range is a proxy for a parameter you are explicitly forbidden to use.

Let's just say it doesn't have a discontinuity, and actually produces a threshold that is deterministically compared with your income. How does it come up with that threshold? You may not be required to disclose that to the applicant, but it would be a shame if people figured out that the threshold is consistently higher for a certain population (for example, people whose given name ends with a vowel).

It's fairly reasonable for a regulator to ask you to demonstrate that it doesn't do any of these things.
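
As a rough sketch of what such a demonstration could look like, reusing the hypothetical approve() black box from above and assuming, as above, that the model is monotone in income (the no-discontinuity case):

    # Hold everything else fixed, vary one suspect attribute, and
    # binary-search for the lowest income each variant needs to pass.
    def implied_threshold(approve, app, lo=0, hi=500_000):
        while hi - lo > 100:
            mid = (lo + hi) // 2
            if approve({**app, "income": mid}):
                hi = mid
            else:
                lo = mid
        return hi  # approximate income threshold for this applicant

    def audit(approve, base_app, names):
        # A consistent gap in thresholds across groups is a red flag.
        return {name: implied_threshold(approve, {**base_app, "name": name})
                for name in names}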



