> Under the bloc’s approach, there are four broad risk levels: (1) Minimal risk (e.g., email spam filters) will face no regulatory oversight; (2) limited risk, which includes customer service chatbots, will have a light-touch regulatory oversight; (3) high risk — AI for healthcare recommendations is one example — will face heavy regulatory oversight; and (4) unacceptable risk applications — the focus of this month’s compliance requirements — will be prohibited entirely.
> Some of the unacceptable activities include:
> AI used for social scoring (e.g., building risk profiles based on a person’s behavior).
> AI that manipulates a person’s decisions subliminally or deceptively.
> AI that exploits vulnerabilities like age, disability, or socioeconomic status.
> AI that attempts to predict people committing crimes based on their appearance.
> AI that uses biometrics to infer a person’s characteristics, like their sexual orientation.
> AI that collects “real time” biometric data in public places for the purposes of law enforcement.
> AI that tries to infer people’s emotions at work or school.
> AI that creates — or expands — facial recognition databases by scraping images online or from security cameras.
I’m glad they marked things like email spam filters as minimal risk. But the list of unacceptable risks is still very broad. For example, even product recommendations on an online store could be considered “manipulating a person’s decisions subliminally” — after all, a non-AI ad does exactly that.
I also don’t think it is reasonable to prevent people from inferring or predicting things based on learned factors. It’s one thing if an AI is directed to be discriminatory. But if it learns on its own that a particular trait is predictive of something else, should we really ban that? I can see such a ban having bad consequences. For example, younger males are more likely to drive recklessly; if they are scored as higher risk, is that really a bad thing?
Stepping back, I think this is part of a long trend of the EU over-regulating itself into stagnation. This political culture is going to severely hurt the bloc in the long term: anyone who wants to build a great business will simply do it elsewhere.