Hacker News

At least in the EU, where Klarna originates and is headquartered, using AI as part of an automated decision making process would be illegal under GDPR.

'Payday loan companies' is quite a loaded term, generally used for companies with predatorily high interest rates (three-plus figure APRs). Klarna, on the other hand, is relatively competitive with a normal credit card, with rates of 20-30% APR typical.
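To put that gap in perspective, here's a rough simple-interest comparison (illustrative figures only, not Klarna's or any payday lender's actual terms, which typically compound and include fees):

```python
def simple_interest_cost(principal, apr, days):
    """Approximate borrowing cost using simple interest over a short term."""
    return principal * apr * days / 365

# Borrowing $500 for 30 days:
card_like = simple_interest_cost(500, 0.25, 30)   # ~25% APR, credit-card territory
payday    = simple_interest_cost(500, 4.00, 30)   # ~400% APR, payday-loan territory

print(f"25% APR:  ${card_like:.2f}")   # roughly $10
print(f"400% APR: ${payday:.2f}")      # roughly $164
```

Same loan, an order-of-magnitude difference in cost, which is why lumping the two kinds of lender together is misleading.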

I've never used Klarna myself, but I can see myself using it as part of a plan to buy expensive goods (furniture, white goods, etc.), and I don't think that there's any particular problem with it (from a consumer perspective) that makes it worse than a normal credit card.

Whether their business can support as many write-offs as it sounds like they make is ultimately not something I can comment on.




I use them sometimes, mostly because I can pay by invoice. That means if I don't receive the goods for some reason, Klarna sits in between. Kinda similar to a credit card, but less hassle than chargebacks...


> At least in the EU, where Klarna originates and is headquartered, using AI as part of an automated decision making process would be illegal under GDPR.

Could you provide a source for this claim?


It's Article 22 (https://gdpr-info.eu/art-22-gdpr/): "The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her" - with the emphasis on solely, and it does have certain caveats which may make it permissible.

Like, this restricts automatic refusal of service based on automated profiling, but it doesn't restrict automatic acceptance of service based on automated profiling, nor an automatic recommendation to refuse service that is then rapidly 'reviewed' by a human.


While I agree it doesn’t necessarily exclude “human in the loop”, there is a lack of clarity as yet over whether a decision made by an AI to flag something for manual review would also qualify as a “decision” in this context.

It’s also not explicit that the “legal effects” need be negative, just significant, which I think entering into a loan agreement probably is.

On the plus side, I don’t think it will take too long for case law to develop on these points.


Wouldn't this also forbid payment fraud detection algorithms for example?





