If you actually go read the linked article, it's pretty clearly spelled out:
> The FTC has decades of experience enforcing three laws important to developers and users of AI:
> Section 5 of the FTC Act. The FTC Act prohibits unfair or deceptive practices. That would include the sale or use of – for example – racially biased algorithms.
> Fair Credit Reporting Act. The FCRA comes into play in certain circumstances where an algorithm is used to deny people employment, housing, credit, insurance, or other benefits.
> Equal Credit Opportunity Act. The ECOA makes it illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.
Seems pretty clear cut to me that they're saying you can't use the 'but the AI did it' excuse if the net result is that your product is violating these.
If you actually go read the FTC Act, Section 5 is about preventing unfair methods of competition. It's a massive stretch to interpret that as "deploying an unfair consumer algorithm".
> (1) Unfair methods of competition in or affecting commerce, and unfair or deceptive acts or practices in or affecting commerce, are hereby declared unlawful.
Now, IANAL, but it seems to me that the FTC's example of racially biased 'AI' would fall squarely under the second clause.
> The proper inquiry is not whether a contractual relationship existed between the parties, but rather whether the defendant’s allegedly deceptive acts affected commerce.
I'm really not sure that "gave different demographics of users different responses to optimize product satisfaction" is considered in any way an "unfair or deceptive" practice "in or affecting commerce".
> I'm really not sure that "gave different demographics of users different responses to optimize product satisfaction" is considered in any way an "unfair or deceptive" practice "in or affecting commerce".
Replace 'product satisfaction' with ARPU and suddenly it very well could be. Charging different users different amounts, even if it merely appears to be racially biased, is a great way to invite more scrutiny.
Ultimately, I figure the FTC has a few lawyers on staff and ran these messages past at least one of them, so they're probably a bit more certain of the messages' soundness than we armchair lawyers are.