FTC authorizes compulsory process for AI-related products and services (ftc.gov)
215 points by magoghm 6 months ago | 183 comments



I thought that language a bit broad and searched for previous examples.

https://www.ftc.gov/news-events/news/press-releases/2022/08/...

> The omnibus resolution relating to car rentals will allow staff to investigate unfair and deceptive practices in that industry.

and a couple others; and:

https://www.natlawreview.com/article/ftc-authorizes-new-comp...

which lists similar measures covering:

> harmful business practices directed at service members and veterans;

> harmful business practices directed at children under the age of 18 years old;

> allegations of bias in algorithms and biometrics;

...so it doesn't seem like this is particularly unusual.


If it is harmful, why does lacking veteran status or other criteria make it less harmful and thus less infringing?


If there’s an effort to protect a specified group of people from a specific harm, why would you infer that the specific harm is considered less harmful to others outside the specified group? There are much more reasonable inferences available if you don’t reflexively view it through a competitive lens.


indeed.

I read it as "we're targeting specific scams aimed at veterans and others aimed at children" and my first thought was areas of the insurance industry selling "coverage" that doesn't cover.

Those scams aren't targeting the general population to begin with.


Because otherwise the group wouldn't be specified?


There’s a conspiratorial way of thinking about it, which says “they could have protected everyone, but chose to only protect this group, therefore they must think it is ok for these harms to fall on others, therefore [insert innuendo or conspiracy]”

And there’s the pragmatic way of thinking about it, which says “these concerns were raised by advocates for this group based on past experiences where this group saw abuse, and the conversation was never elevated beyond protecting this previously-victimized group from future abuse”

That's not to defend what is probably bureaucratic myopia, just to say the reasons are more likely banal than strategic.


I'd distinguish between the two by asking if the group is specified in policy or in incident documentation. The former would imply uneven application of policy, whereas the latter is what you're referencing.


Sibling response posited a rather compelling reason to specify the group otherwise. It’s by far the most obvious explanation that comes to my mind as well.


One of the main issues is that veterans who are deployed may, through no fault of their own, not be in a position to challenge say a court decision or otherwise exercise their right within the normal time limit allowed. Even if they are not deployed abroad, they may be forced to move at a moment's notice when other people usually have the time to make preparations and thus may not get mail immediately.


My opinion: the US strives to make their warrior class one of some sort of privilege, or at least appearing to have privilege. They have had various amounts of success in doing so. Thank you for your service, here's your veteran discount. Have you decided which college you want to spend your GI Bill at yet? Don't worry about scam institutions, the FTC will protect you.


One of the reasons -- or maybe even the very reason -- for college not being free in the US as it is in the EU is that it's the chief recruiting tool. Read Lauren Hough for more; she is much more eloquent on this.


A career in the military (and, to a lesser extent, shorter stints) is in several ways a route to access European-like social services.

- Guaranteed housing—including for your spouse and kids.

- Healthcare for same.

- Free college education.

The pay sucks, but the benefits are pretty damn good. I do think universal healthcare and free public university would take a big bite out of military recruitment rates.


The military has always been a caste, and will always be.


> The military has always been a caste, and will always be

The US made a very strong go, initially, at not doing that, with a cadre-style Army reliant on militia mobilization for significant action (and a somewhat similar situation for internal security).

The whole security forces as a separate caste with interests divorced from the population thing was something the founders were aware of and concerned about.


Fun fact: the GI Bill is a revenue-generating scam. Meaning, it makes more money than it pays out. I didn't know until boot camp that you have to PAY to get the GI Bill, and then they make it as complicated as possible to use, so most people never end up using it. Under capitalism nothing happens unless someone's making money.


It's not complicated to use.

The most difficult part for most people is staying in school, which is made significantly easier by not having to pay for it.


FUD. I don't recall paying anything into the (Montgomery) GI Bill. It wasn't hard to use. I got $50K of basically free money out.

Granted I went to basic training over two decades ago. Maybe the Post 9/11 GI Bill is different? But I doubt it. What I've heard from fellow veterans is it's better in so many ways. Apparently I'm even eligible for a year of its use even though I depleted my original GI Bill (with kicker).


> Under capitalism...

Which part of capitalism causes the government to scam the state-funded military? There doesn't appear to be any private ownership of capital involved at all.


I am not sure why the GI Bill started out as a free benefit and then they changed it into a revenue-generating scheme. It's odd to say the least, but wouldn't you agree that it at least "makes sense" under capitalism that the military is trying to increase its revenue streams?


Because veterans, children, people with illnesses or disabilities, people with or without children, religious denominations and ethnic groups, people over the age of 40, people with non-standard gender or sexual orientation, and a couple of other categories that I'm sure I haven't mentioned tend to have a history of being discriminated against in various arenas, and having fewer resources to address that discrimination.


So, 150% of the population? Are people who were dealt 2 and 3 in a poker hand a protected class too? Back in the day we just used to say life isn't fair, play with what you have in the best way you could. And it worked.


When was this back in the day? Prior to the 1972 amendment to the CRA? Because no, it did not work.


You probably have items in your house made by people working for $100 a month - which is less than what slaves in mid 19th century's America earned. But this is not discrimination, this is different, right?


>which is less than what slaves in mid 19th century's America earned.

slaves aren't paid


A skilled slave in the Upper South could make $500/year, $20k in today's money.


>You probably have items in your house made by people working for $100 a month - which is less than what slaves in mid 19th century's America earned. But this is not discrimination, this is different, right?

Not trying to straw-man you but it seems that your point is that because it is possible to skirt US anti-discrimination laws by importing from overseas, the US should have no anti-discrimination laws?

That is not a good point. Would you please clarify if there is something I missed?


No, my point is, it is immoral that anti-discrimination sentiment stops dead at US border.


Interesting, that seems totally at odds with your earlier statement of "Back in the day we just used to say life isn't fair, play with what you have in the best way you could. And it worked."

Yes, I agree, anti-discrimination shouldn't stop at the border.


>Back in the day we just used to say life isn't fair

That's why back in the day sucked.


It doesn't. The adoption of compulsory process for investigation of certain potential violations doesn't make other violations less infringing. It's a procedural change dealing with investigations, not a change in the law governing the behavior the FTC regulates.


There's (ostensibly) a general current in American politics towards non-interference in business, even where it causes harm, but practical support for intervening in protection of certain groups seen as anointed, vulnerable, etc.

There's no secret to it.

If you're just not down with anointing vets with special treatment, that's fine and I think I'm with you, but your rhetorical question evaporates when you accept that politics is real and done in the open.


They agreed to curtail their liberty and risk life and limb on behalf of, and at the direction of, the state. Whatever you think of such a decision, it's absurd, especially for anyone who's ever taken compensation in equity, to imagine the state should not consider itself under some reciprocal obligation.


> There's (ostensibly) a general current in American politics towards non-interference in business

At what point in time has our economy been more regulated than it is today? I can't even think of a particular industry that has trended towards less government involvement. I suppose you could make the case that modern Republicans are relatively focused on "non-interference", but that's certainly not new (Reagan) and it's certainly not a general American political trend.

Really curious what you mean by this, because the way I'm reading this there is no basis in reality.


>At what point in time has our economy been more regulated than it is today? I can't even think of a particular industry that has trended towards less government involvement.

In a specific and literal 'number of regulations, period' way, you're right. But in a more general sense involving the FTC, prior to 1978 the economy was significantly more regulated.

So there may be more restrictions on a product to qualify as natural or organic, for example, but there are fewer restrictions on the ability of the company that makes the product to engage in non-competitive behavior. Banking is a good example - there are more restrictions and requirements for disclosure, but before the 90's, banks were not allowed to have branches in multiple states. Additionally, you now have a number of tech companies behaving like banks but operating in a significantly less regulated way that wasn't possible twenty-five years ago. From a 'whole economy' perspective, the required reserve balance reductions in the 90's left the federal govt with less ability to control the economy without major shocks.

The government has less control over the economy today than it has had in decades, and since the late 70's a failure of antitrust enforcement and changes in financial rules have resulted in market concentration in every major market to a degree that they have all become oligopolies.


No worries. I think you're just taking the wrong sense of current.

Non-interference in business is the prevailing current in the US, at the federal level, since its founding. Interference needs to be constitutionally justified, and (historically) needs to not be better applied by more local jurisdictions.

This is why the past shows comparatively lax regulation for so long.

You're right that this is a more pressed issue lately and that a local relative current towards regulation tempers the prevailing one, but on net it's actually still relatively non-interfering compared to what you see in peer nations.

Leaning on the metaphor: The prevailing Atlantic current mostly goes north up the US coast, but you can still find local currents in all directions amidst that prevailing flow.


That clarified my misunderstanding, thanks!


> At what point in time has our economy been more regulated than it is today?

1930-1970.


Which sector of our economy is subject to less regulation today than in the period between 1930-1970?


Airlines, for one.

And that's all the history homework I'm going to do.


You got me there.


Advertising, if you consider it as a ratio of impact versus regulation. Advertising has a thousand fold impact on people today but not a thousand fold increase in regulation.


Banking?


Certainly not after 2008 and the Dodd-Frank Act.


They're not specifying who they want to protect from a general harmful business practice, they are saying who is targeted by a specific harmful business practice. So for example they're not saying protect only veterans from scam calls, they're saying protect everybody from scam calls that impersonate VA services in an attempt to scam veterans.


It doesn't, but opposing protections for children and veterans is widely perceived to be a losing political strategy.


Because the average citizen is not considered worth protecting in the United States. If you aren't of a protected class, you don't matter.


If you're really complaining that the guy in the wheelchair gets too much assistance in the form of a ramp to get into places or that children aren't treated the same as adults, you should take a hard look at yourself.


Somehow caring about every citizen is too hard? Nobody reasonable is against helping those who need it. It doesn't have to be at the cost of giving a shit about everyone else. But keep making up reasons to favor the status quo without activating any new synapses.


Let's get into the "at the cost of giving a shit about everyone else" part - do you think that, prior to anti-discrimination laws intended to create an even playing field, everyone got some kind of benefit that they no longer get? If so, you are mistaken. In the United States, we do not have and have never had a system that cares about every citizen. We just don't. What we have are limits to cruelty and a general concept of fairness as a base starting point for life.

ADA compliance especially does not give the disabled an advantage over others. It is an attempt to level things and an acknowledgment that assistance is necessary for some people to enter the workplace as otherwise they would not physically be able to do so. Most of the other protected classes are based on the Constitution - not including pregnant women and women that may become pregnant, as a sub-class, who are classically discriminated against in the workplace and elsewhere and forced out of their jobs on a regular basis.

So when someone, you in this instance, complains about "not being worth protecting" in comparison to a group that is routinely disadvantaged or innately vulnerable, it creates the appearance in people listening or reading their words that they are extremely naive and/or a cruel, mean person. This perception is something a person who is neither of those things should consider before speaking or posting about subjects like this one.


That's a lot of words to basically tell me to shut up, I'm a bigot.

Let's see where this country is when those like me rightfully sit and do nothing to defend it. There is a cost for acting like certain citizens matter more than others.


It wasn't, actually, it was saying that if you aren't a bigot you should be careful with your words because you are currently giving that impression.

People who are upset that handicapped and vulnerable people get help or are protected by the government don't usually stand up and defend anything at all, in my experience. So no real change there.


If people had reading comprehension they'd already know I'm not bigoted toward disabled people.

People want to make a problem because I dared to suggest our country protect all of its people. There's nothing morally wrong with my stance. I'm not bitching and moaning at a parking space or a ramp. I'm bitching about the fact my country will never act to protect me, because I don't fit the intersectional chart.

Identity-based protection is discrimination. Make excuses all you want but I'll abandon any group that won't offer me equal protection.


So you're just a socialist, then? Weird way to put it.


No, I want my country to honor its founding document, specifically the 14th amendment:

https://en.wikipedia.org/wiki/Equal_Protection_Clause


The 14th amendment isn't a part of the founding document, and it does. If you are in a similar circumstance to someone, you are treated the same. If you are in a wheelchair and someone else isn't, you are not in the same circumstances.

Your point really seems to be 'why doesn't the government give me exactly the same thing they give the less fortunate.' The answer is, you don't need it to put yourself in a position to compete. They do.


"products and services that use or claim to be produced using artificial intelligence (AI)..."

Interesting test of whether we suddenly see a decline in the number of products claiming to have or be produced using AI. Probably not until/unless the FTC actually starts using these powers in a significant way.

By the way, I don't mean cases where they lie about whether AI was used; I mean cases where nothing meriting the term "artificial intelligence" was used, but they have been stretching the term so they can use the latest buzzword in advertising.


Interesting test of whether we suddenly see a decline in the number of VC pitches claiming to have or be powered using AI.

While the dumb money flows to anything with an A and an I in the pitch deck, I expect they will continue.


> "products and services that use or claim to be produced using artificial intelligence (AI)..."

Imagine a government agency that has oversight over "products and services that use or claim to be produced using electricity..." This may become the largest power grab in US history.


But every functioning government regulates electrical networks and devices in some way. People think it's mostly reasonable since they don't like being electrocuted or having things explode and catch on fire.


And those regulations only happened _after_ the dangers and harms were shown to be real and actually happened.

I would be wary of pre-emptive regulations.


AI has already hurt plenty of people though; we already have examples of biased "AI policing", artists and writers are getting their work stolen left and right, and I've heard tell of several instances of seriously vulnerable code making it to production and revealing folks' private information due to bugs introduced by AI assistants.

This regulation is almost certainly too far reaching and written by bureaucrats rather than experts, but the need for regulation is already there.


> the need for regulation is already there.

We need regulation that holds people accountable. I see too many 1%ers claiming they were not responsible for something their algorithm did.

In old sci-fi, a computer was the legal agent of its operator/owner and anything that computer did was 100% the fault of its owner/operator.

If we simply say "you are responsible for what your AI does," then the problem is solved.

We could solve the problem of not being able to bring corporations to justice at the same time. Force them to become proprietorships. Every owner/partner is culpable for crimes committed by that proprietorship.

Did a cop arrest an obviously innocent person? It's the cop's fault.

Did my web site plagiarize a painting? Then it's my fault.

Did a startup release vulnerable software? Then the crook that exploited the bug is responsible, but so is the entity that released the software.

If we get back to common sense and stop blaming the Twinkie for the murder, things get a lot simpler. https://en.m.wikipedia.org/wiki/Twinkie_defense


Bad code making it into production has nothing to do with AI, and everything to do with the incompetence of the coder.

As for "stolen" work, I am still not convinced that AI-produced works are a derivative of the training data.

And AI policing is just as bad as human policing - what needs regulating isn't AI, but policing.


>i am still not convinced that ai produced works are a derivative of the training data.

Then what are they?


If you read a bunch of books and then write your own book, is that book a derivative of the books you have read throughout your life? I doubt too many artists have never experienced nor been influenced by other works in their medium prior to making their works, but that's not what we are referring to by derivative work. Typically derivative work means something like a movie based on a book, or a translation, or a parody, etc where something of the new work is clearly coming from a previous work. While an AI could certainly produce a derivative work, for example if you only trained an AI on a specific artist's style to produce new works that specifically emulated that style, at some point the link between a new work and those that influenced it is so tenuous that the new work is original.


>If you read a bunch of books and then write your own book, is that book a derivative of the books you have read throughout your life? I doubt too many artists have never experienced nor been influenced by other works

An LLM isn't an artist and I'm not a piece of software. But if you think the LLM output ought to be copyrightable in its own right, who should hold the copyright? The person who wrote the prompt? The person who picked which entropy source the model used? The owner of the GPU that ran inference? The last person to provide input for fine-tuning the network?


> An LLM isn't an artist and I'm not a piece of software.

Hard disagree on both counts, but for the sake of argument let's go with your assumption.

> But if you think the LLM output other to be copyrightable in its own right, who should hold the copyright?

The LLM, being a piece of software, should logically be treated like any other piece of software used to create a work by a human artist. It is a tool. Let's say there is a song composed for a synth keyboard. Should the copyright go to the person who designed the keyboard? The person who owns the equipment that made the keyboard? The people who recorded the samples that each key stroke references? The person who curated the samples for the keyboard? The person who owns the keyboard? To the person who figured out the sequence of keystrokes that produces the song? Or to the person who hits those key strokes when the song is being recorded?

Well the person who designed the keyboard gets the ip of the keyboard's design. The person who owns the production hardware does not inherently get any ip. The artists who created all the works sampled get the ip to their specific samples. The curator would not get any ip for the collection but they might if they did something transformational, like passing those samples through a filter. The keyboard owner does not inherently get any ip. The composer gets the ip to the composition in general. The artist who played the song for the recording gets the ip for that specific recording of the song. If I were having a conversation with someone, I would say that the person who figured out the sequence of keystrokes is the person who holds the copyright.

Likewise, the creators of the LLM would hold the ip related to its design, but a specific output of the LLM would belong to the person who thought of a way to make it produce that specific output - ie the prompt writer. Those who made the inputs would own the ip to the part of the inputs which remained untransformed in the final work. Curating input for the network training would be equivalent to curating numbers for a phone book - the collection itself is not copyrightable but some transformational work (like cleaning the data in a specific way) would be. Owning the hardware the LLM was trained on would not inherently grant any ip rights.

Of course there is some grey area where it might genuinely be unclear who holds the rights to what (which has always been an issue with intellectual property laws), but the idea that just because it's sometimes unclear who contributed what to a work means no work was created at all is obviously folly. Others might have a different opinion than I on how the existing framework should be tweaked to accommodate this new technology, but it's definitely not a major departure.


> I've heard tell of several instances of seriously vulnerable code making it to production and revealing folks' private information due to bugs introduced by AI assistants.

By this standard, we should ban all integrated development environments because, like the AIs you mention, they allow programmers to produce hugely buggy code.


> artists and writers are getting their work stolen left and right

Please show me a few examples where a work was copied by an AI and the copy is so good it violates copyright law.

All I have seen is someone used a pirated dataset to train AIs. Suing over that is like suing Seagate because someone tested a prototype hard disk by storing pirated books on it.


> AI has already hurt plenty of people though; we already have examples of biased "AI policing"

Cops hurt innocent people. That's not the fault of the Chevrolet they are driving or the AI they are using.

Criminals will seek to obfuscate blame for their crimes to avoid getting caught, so police will frequently get the wrong person. Libertarians would rather leave it to lynch mobs, while statists want to fund better tools for the police. No solution is perfect.


I get what you’re saying that perhaps the verdict is not yet in. But it feels kind of bizarre that the position is “gotta wait until after enough people die.”


Should we arrest you for murder today, or should we wait until after enough people die?


Pretty much all electric chargers, for example, do have a certification, yes


The FTC already has the power you're worried about, this isn't a power grab. The FTC is a government agency that has oversight over all products, period. They then choose subsets to focus on.


Bigger than the department of commerce?


Looks like this is basically for the paperwork they need to compel parties to disclose things like business practices if there is a credible complaint against them. Probably until now they have had to go through another department or section even though they've stood up an AI-specific division. So this just makes it easier for the AI team to, say, compel a company making allegedly fraudulent AI claims to justify those claims, or face the same consequences another company might for making, say, fraudulent product effectiveness claims. Same capabilities, just directed at bad actors in this corner of the industry.


> AI includes, but is not limited to, machine-based systems that can, for a set of defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Generative AI can be used to generate synthetic content including images, videos, audio, text, and other digital content that appear to be created by humans.

Finally we have a definition for what AI is.


So it looks like a store inventory tracker from the 80s that alerts staff to restock a shelf is AI, and subject to this.


If it says "based on past trends, order 1000 pieces of product X"

While really simple, it is still a form of AI; I don't see anything wrong with that.

An advanced model would maybe look at industry prices and consumer trends, but it's the same in my eyes.
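For the curious, the kind of 1980s-style reorder logic being described can be sketched in a few lines. This is purely my own illustration; the function name, lead time, and safety-stock numbers are made up:

```python
# Toy reorder-point logic: a trailing-average "prediction" of demand
# plus a "decision" about how much to order. All values illustrative.

def reorder_quantity(stock_on_hand: int, recent_daily_sales: list[int],
                     lead_time_days: int = 7, safety_stock: int = 50) -> int:
    """'Predict' demand over the lead time, then 'decide' how much to order."""
    forecast = sum(recent_daily_sales) / len(recent_daily_sales)  # trailing average
    reorder_point = int(forecast * lead_time_days) + safety_stock
    if stock_on_hand >= reorder_point:
        return 0  # no order needed
    return reorder_point - stock_on_hand
```

Nothing here is more sophisticated than arithmetic and a comparison, yet it makes "predictions" and "decisions" for "a set of defined objectives" - which is the commenters' point about the definition's breadth.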


It doesn't even need to do that.

It's not saying "predictions, recommendations, and decisions". It's "or".

So it just needs to recommend or decide based on any data (such as how many are left at this moment in a database) in a way that has real world outcomes (someone stocks a shelf).

A thermostat that turns on an HVAC unit after temperature drops below a reference point technically qualifies as AI based on this definition.

It's maybe a bit broad.
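To make that concrete, here's a toy sketch (my own illustration, not from the FTC text) of the thermostat case - a machine-based system making a "decision influencing a real environment":

```python
# A bang-bang thermostat. Under a literal reading of "machine-based
# systems that ... make ... decisions influencing real ... environments",
# even this qualifies. Setpoint value is illustrative.

def thermostat_decision(temp_f: float, setpoint_f: float = 68.0) -> str:
    """Decide whether to run the furnace based on the current temperature."""
    return "heat_on" if temp_f < setpoint_f else "heat_off"

print(thermostat_decision(60.0))  # heat_on
print(thermostat_decision(72.0))  # heat_off
```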


> It's maybe a bit broad.

I've always thought the difficulty in defining what AI is stems from a need to differentiate humans from the "artificial" part of it.

The real issue with using AI (from a law enforcement perspective) is the inability to put somebody under oath and ask them why they made the decisions they did. All the FTC really needs to say is something like "If we suspect your product is discriminating against a protected class and you can't (as a company) explain what the decision making process was (that was non-discriminatory) we will assume the worst."


It is not, because now AI means that.

If you want to say "true AI" you have to say "General AI".


Didn't know numerical analysis was actually AI all along. I'm glad we have wise overseers in the FTC to save us from the harmful (sorry, unsafe) effects of AI technology. Next time be more careful when you try plotting a graph of a function.


Can we now ban companies from marketing products as “AI” when in fact there is none?


I'm pretty sure that any company with an if statement in their codebase can satisfy that definition, so calling anything false advertising is going to be harder than it would have been before this was published.


I have a hard time thinking of any useful software that doesn't fit this.

But then, that's fair. What is stupid is using this as a rule to decide how a product is regulated.


Rules tables are AI and must be regulated.

So is Excel.


> AI includes, but is not limited to, machine-based systems that can, for a set of defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.

PID loop is AI by this definition: https://en.wikipedia.org/wiki/Proportional%E2%80%93integral%...
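For reference, a minimal textbook-form PID update looks like this (my own sketch, not from the linked article). It pursues "a set of defined objectives" (track a setpoint) and emits "decisions" (control outputs) that influence a real environment:

```python
# Minimal PID controller: output = Kp*e + Ki*integral(e) + Kd*de/dt.
# Gains and setpoint are illustrative; no anti-windup or filtering.

class PID:
    def __init__(self, kp: float, ki: float, kd: float, setpoint: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement: float, dt: float) -> float:
        """One control step: return the actuator command for this sample."""
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

Decades-old control theory, yet a literal reading of the definition sweeps it in.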


>AI includes, but is not limited to, machine-based systems that can, for a set of defined objectives, make predictions, recommendations

Also regression, most of what makes computers fast (LRU cache, branch prediction), optimising compilers.

>decisions

Arguably this deems any program containing a conditional branch instruction as AI.
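To illustrate the LRU point: a cache "predicts" that recently used items will be used again, and its eviction step is a "decision". A minimal toy sketch (the stdlib's `functools.lru_cache` does this for function results; this hand-rolled version just exposes the mechanism):

```python
from collections import OrderedDict

class LRUCache:
    """Toy LRU cache: evicting the least-recently-used entry is an implicit
    'prediction' that recently used keys are the likeliest future accesses."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None
        self.data.move_to_end(key)  # mark as recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # the eviction 'decision'
```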


Thank goodness the law is interpreted by judges and not engineers.

No, I'm not joking.


Ideally the law is written in language that can be understood by the people who are expected to follow it, not just judges. Preferably without fuzzy boundaries where they are not necessary, such as to avoid injustice.

The complaint I see here is that the wording, at least in the article, does neither of these.


Since what it does is set procedure for the FTC during FTC investigations, the people that are expected to follow it are...the FTC.

(It's also important to people subject to FTC investigations to determine whether they are obligated to respond to particular things that purport to be a particular form of administrative subpoena, but, I mean, at the point that your business is subject to an FTC investigation in which they are issuing things that look like administrative subpoenas, you probably ought to be engaging counsel to guide you in that process for more reasons than just "is this something we need to obey", and the substantive law being investigated is going to be a much more complicated thing than the one-page order adopting compulsory process for a particular domain of investigations.)


> but, I mean, at the point that your business is subject to an FTC investigation in which they are issuing things that look like administrative subpoenas, you probably ought to be engaging counsel to guide you in that process for more reasons than just "is this something we need to obey", and the substantive law being investigated is going to be a much more complicated thing that the one page order adopting compulsory process for a particular domain of investigations.)

Clearer wording would hopefully allow businesses to avoid getting some CIDs on this topic and/or streamline the process after getting a CID. If you know what you need to document to make it clear you are on the right side of the law, hopefully that would make the entire process go faster.


Any organization that needs to worry about this ruling has access to a legal department who can interpret it for them.


As stated elsewhere, this is so broad that it would include people selling handmade weathervanes, who do not have access to legal counsel, let alone a legal department.


The FTC, of course, being long and famously in the habit of going after people selling handmade weathervanes.

Law is not code. They don't work the same. The longer you confuse them, the more confused you'll be.


Under that logic, why have rules at all? Let's just give the FTC the power to do anything it wants with no oversight and trust that they won't abuse that power in any way. It would certainly make it easier for them to do the things we want them to do, and after all, the FTC has no history of abusing the powers it never previously had.

Law is not code, law is law, and our legal system is based on the idea that by default the government can't do things, we specifically allow things if and only if we want the government to actually be allowed to do those things, and we limit what we allow as narrowly as possible while still enabling the goal to be accomplished. There is a long history of overly broad laws being applied in obviously unintended ways, with countless examples of horrific consequences, which is why we set up our legal system this way. We know that relying on reasonable interpretation is a bad practice that should be avoided as much as possible. It is laughable to dismiss such concerns as being rooted in a lack of understanding of how the law works, indeed I would describe anyone who does not have such concerns as naïve at best.


> We know that relying on reasonable interpretation is a bad practice that should be avoided as much as possible.

I have some bad news for you about the last couple hundred years or so of how the US legal system works, especially in but not exclusive to the aspects of case law and judicial review.

For future reference, the convenient shorthand for your position here is: "Marbury v. Madison was wrongly decided."


Whew, good thing I don't have a legal department!


I think that when the law needs interpretation and is not clearly understood by the majority of unwashed peasants, it should not exist.


I think there should be wasps the size of deer. Certainly either is about as likely.


Freeing the slaves was just as unlikely at some point


It really wasn't.


If I were a lawyer (and suitably charismatic), I could stand up in court and convincingly demonstrate that all the trivial examples in this thread are indeed AI as written by the founding fathers of this legislation.


Well, also, the descriptive text in FTC press releases isn't law, not even administrative law.


No, but it's a description of one party's opening position in the negotiation out of which law will eventuate. Still quite far from something the median engineer is well equipped to encompass, judging at least by a decade of experience on Hacker News.


Yep, the former are more useful to buy and it's more socially acceptable.


I mean that's at best a wildcard when it comes to saving us from our own laws; it's not like our judicial system has literally ever consistently demonstrated good judgement. It's by far the most conservative and reactionary of our branches of government. That's not even touching the absolute disgrace that is our current generation of judicial staff.


> Arguably this deems any program containing a conditional branch instruction as AI.

Correctly so. What is called “AI” is effectively a mess of conditional branch instructions, except with the twist that you cannot know what they are or directly control them.


With the twist that we cannot know what they are or directly control them being the important bit.

A bunch of conditional statements are fine so long as they can be reasonably analyzed. It's the point where you stop being able to analyze them that you have a risk you didn't before.
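To make the distinction concrete, here's a toy sketch (everything below is hypothetical — my own names, thresholds, and weights, not anyone's real system): the first decision rule can be audited by reading it; the second makes the same kind of decision, but its "branches" are buried in learned coefficients nobody wrote by hand.

```python
# Auditable: the decision logic is explicit and can be read directly.
def approve_loan_rule(income: float, debt: float) -> bool:
    return income > 50_000 and debt / income < 0.4

# Opaque (toy stand-in for a trained model): the same kind of decision,
# but encoded in numeric weights rather than readable conditions.
WEIGHTS = [0.00003, -0.00008]  # hypothetical learned coefficients
BIAS = -1.2

def approve_loan_model(income: float, debt: float) -> bool:
    score = WEIGHTS[0] * income + WEIGHTS[1] * debt + BIAS
    return score > 0
```

Scale the second form up to billions of weights and the "reasonably analyzed" property is gone, even though both are, mechanically, just arithmetic and comparisons.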


Correct. And when you apply them at scale.


You're forgetting about the `const` function ;) A martingale is a process whose next expected value is equal to the current value. Suddenly, it's all AI!
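Tongue in cheek, the constant predictor really does satisfy the quoted definition of a machine-based system that "makes predictions" — a toy sketch (the function name is mine):

```python
def predict_next(current_value: float) -> float:
    """For a martingale, the best estimate of the next value is the
    current value: E[X_{n+1} | X_n] = X_n."""
    return current_value

predict_next(42.0)  # -> 42.0
```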


If you have a general prediction system, and somehow configure it to predict const, then it's still a general prediction system.

If it can only do const, then it's not.

So I see no problem with the consequences of your observation.


Well, yeah, but that's undeniably AI when used to predict such a process, even if it is wildly primitive and incapable of being adapted to other things. It's all statistical models.


It doesn’t even have to be software.

This camera tracking a goldfish to pick stocks is AI by this definition:

https://youtu.be/USKD3vPD6ZA

Update: A coin toss meets this definition.


Both the camera and fishtank are artificial, this is clearly AI. The goldfish would have picked different stocks in a natural habitat and when observed by another goldfish.


> AI includes, but is not limited to

https://en.wikipedia.org/wiki/Affirming_the_consequent

The FTC hasn't defined AI, they've specified some properties which they believe some types of AI possess.


Or, idk, a broomstick that's stood up just right, so that when it's windy outside it falls over, alerting you to the wind and recommending you close your windows.



My mechanical mercury thermostat is now AI!


So... any algorithm that leads to a prediction better than chance? That seems dangerously and unnecessarily broad.


> That seems dangerously and unnecessarily broad.

It's called government regulation.


Plenty of safety critical PID controllers out there though...
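For the sake of argument: a textbook PID step arguably "makes decisions influencing real environments" under the quoted definition, even though nobody would call it AI. A minimal sketch (gains, names, and numbers are mine, not from any real controller):

```python
class PID:
    """Minimal textbook PID controller, for illustration only."""

    def __init__(self, kp: float, ki: float, kd: float):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint: float, measurement: float, dt: float) -> float:
        """Return the control output for one timestep."""
        error = setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Pure proportional control: output is just kp * (setpoint - measurement).
pid = PID(kp=1.0, ki=0.0, kd=0.0)
pid.step(setpoint=10.0, measurement=8.0, dt=1.0)  # -> 2.0
```

And unlike an opaque model, every term here can be analyzed and bounded — which is exactly why these things are allowed near safety-critical hardware.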


The carve out for services that claim to detect AI is interesting. If the FTC realizes that's snake oil, that's a good thing.


Well, it's possible:

https://arstechnica.com/information-technology/2022/11/new-g...

All LLMs are deterministic systems with a random seed. Get rid of the noise that is injected into the system and the outputs become repeatable. That means that the output is a pattern, one that may be "detectable".


That's only if temperature=0.0 which is not the case by default. Otherwise the output has entropy.
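To illustrate the point (a toy sketch of temperature sampling, not any real model's sampler — the logits and names are made up): as temperature goes to zero, sampling collapses to deterministic argmax; at higher temperatures the output gains entropy.

```python
import math
import random

def sample(logits: list[float], temperature: float) -> int:
    """Sample a token index from logits; temperature 0 reduces to argmax."""
    if temperature == 0.0:
        # Greedy decoding: fully deterministic given the same input.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax over temperature-scaled logits, then weighted sampling.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    weights = [math.exp(s - m) for s in scaled]
    r = random.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return i
    return len(logits) - 1

logits = [2.0, 1.0, 0.5]       # made-up example logits
sample(logits, 0.0)            # always index 0 (greedy)
sample(logits, 1.0)            # stochastic: any index, weighted by softmax
```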


The output is a pattern for a specific input.

They need to know the input...


It harms small enterprises greatly.


How?


It's a no-brainer.


So explain yourself in that case, if it should be so obvious, it shouldn't take long.


The standard argument is that larger regulatory burdens create fixed costs that are inconsequential for large enterprises but are barriers to entry for smaller businesses. E.g. banks and pharma continue to be profitable despite large regulatory burdens, but it's nearly impossible to start a new bank or pharmaceutical company today.


To what extent is a raised barrier to entry a feature? High regulatory costs demand greater investment, which in turn demands more third-party eyes from an early stage in a company's development.

The last time we had a largely unregulated cottage medical industry, it precipitated billions of dollars in fraud [1][2][3].

Given the cost of the computing resources, we weren't going to be seeing any mom-and-pop non-GMO free-range AI shops anytime soon anyway.

[1]: https://www.usatoday.com/story/news/nation/2023/03/13/chicag... [2]: https://www.cnbc.com/2022/04/20/doj-accuses-2-in-california-... [3]: https://www.propublica.org/article/how-fraud-increases-medic...


> To what extent is a raised barrier to entry a feature?

From whose perspective? It's beneficial for large, established businesses and detrimental for prospective businesses.

> The last time we had a largely unregulated cottage medical industry, it precipitated billions of dollars in fraud

I don't understand how attempts to defraud covid-related government programs [1,2,3] and Medicare overspending [3] are related to the argument that regulatory burden disproportionately harms small businesses. I would also note that the healthcare sector is among the most heavily regulated in our economy, and that covid-era programs rushed into existence are exceptional.

Lastly, the FTC authorized compulsory process can be applied to all "AI-related Products and Services". So these "mom-and-pop" AI startups can (and already do) exist. Even if that weren't the case, this would be a straw-man argument against the point that regulatory burden disproportionately harms smaller businesses.


My last two employers have been startups in the defense and medical industries - both heavily regulated fields. Both have external investors, but neither is a slave to PE. As it pertains to tech (which is the relevant scope here, since we're talking about AI), the American regulatory atmosphere is not an outright deterrent to new businesses. Far from it.

> I don't understand how attempts to defraud covid-related government programs...

It's support for the notion that blowing away most barriers to entry in critical industries (like medical) would cause more harm than good. These testing sites grew on trees, and none of them provided accurate results in a reasonable amount of time. The reason COVID testing fraud was viable was because there was no oversight - force a little bureaucracy into the works, and fraudsters wouldn't see dollar signs. Not an elegant solution, but we live in an inelegant world.

> these "mom-and-pop" AI startups can (and already do) exist.

Maybe my phrasing is a little too tongue-in-cheek. My goal here is to draw a line between "small businesses" and "small tech businesses." These are wildly different things.

When non-tech people refer to "small business" they usually mean firms worth ~$1M or less. The family-owned bodega down the street, for example. I 100% agree that bureaucratic regulation hurts these small businesses much more than it helps society at large.

"Small tech business" has a totally different meaning. A Series A AI startup could easily be worth $30-50M. Regardless of regulation, it has high cost barriers arising from its compute needs, and the personel to wrangle those computers. Basic regulatory oversight constitutes a proportionally smaller cost for such a firm.


These aren't regulations to operate though, they define process within FTC investigations.


How is this particular thing an increase to the regulatory burden facing a new entrant?


Yeah, the small enterprises that sell "AI detectors" that are no better than a coin toss and actively harm students, or any "AI whatever crap" that is 10% calls to the OpenAI API and 90% data collection for ads.

Can't say I'm shedding a tear


Also, how much of the OpenAI calls is data collection for ads?


Is there a link to the actual legal text of the omnibus resolution? Are FTC resolutions publicly available?

I wasn't able to locate the referenced document after briefly searching the FTC legal library and the regulations.gov website. Apologies if I've missed an obvious link from the press release.


I can't find it published yet, but note that these resolutions are common, extremely short, and very standardized.

See this compilation of compulsory process resolutions: https://www.ftc.gov/system/files/attachments/press-releases/...


> AI includes, but is not limited to, machine-based systems that can, for a set of defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments

by this definition my toaster is an AI


Are we properly evaluating the potential negative impacts of safetyism, including that of unfriendly nation states making faster progress than we can? I see these sorts of requirements through a skeptical eye.


We're not. Or we are.

We can't tell, because all of this is hand-waved and hypothetical.


A bit disappointing that not everyone falls for AI FUD.


LLMs aren't FUD, they are literally the future we're betting on to keep Microsoft here doing the things that Microsoft does. Whatever those things are.


Yeah, Microsoft continuing to do what it does is not a desirable outcome.


More procedures to operate will limit competition


These aren't procedures to operate, this provides particular process within FTC investigations.


This is the total opposite of accelerate.


If you drive off a cliff you only go faster for a brief period of time.


Lina Khan is the chair of the FTC and was just on Hard Fork podcast. Pretty interesting. I heard the episode this past weekend and today it occurred to me that the sheer incompetence demonstrated by OpenAI is actually something they would likely see as a threat.


She said her "p(doom)" was 15% (0.15), that's crazy high. Does she know what that entails?


So if I make an app using any type of predictive ML, I have to go through this?


No, the compulsory process applies to FTC investigations in the area (it basically makes a form of administrative subpoena available in that context), it isn't something that is compulsory for doing business in the area.


Right.

And typically, you only have to go through the investigation if someone brings a complaint about your product. You would have to be screwing up big time to rack up complaints like that though.


Or generating art and be found by the wrong group of enraged artists...


nah, it'll be selectively used against companies that piss off the right people


I like this reply, it’s funny


What does this mean in the long term? (I'm not a lawyer.) More transparency in the AI industry or just an ability for FTC to subpoena AI companies?


Fascinating implications on search engines. An army of SEO professionals, publishers and all manner of companies that rely on Google for traffic will be interested in pushing for a process review of Google's AI systems.


How could this scale? Does the FTC choose what they will investigate?


The FTC responds to complaints from affected parties, then investigates the complaints, then if necessary works with the Dept of Justice to charge a violator or fine them.


> How could this scale?

It's a streamlining provision for FTC investigations in the area, so it inherently improves scalability.


It’s the threat of action that generally keeps companies/people in line.


What's that, regulatory capture? How did you move so fast?


Damn, spam is dead now.


We can dream.


The empire strikes back.


I had a conversation with ChatGPT about this announcement to help me understand it better. I found it helpful (seriously), so I'm posting a link to the chat here. Apologies if this sort of thing is inappropriate for a comment thread on HN. Let me know if it's uncool and I won't do it again:

https://chat.openai.com/share/635b8198-166b-4c53-abb5-7d4050...


I think it's fine as long as one isn't sharing a long ChatGPT answer here verbatim. That tends to be pretty annoying (since we could just ask ChatGPT ourselves if we wanted to).


As long as we are sharing Chats from GPT, I am also having chats about AI risks.

I asked it to look into history for patterns regarding 4 different scenarios:

1. Carefully controlled technology -> Increased Risk

2. Haphazard release of technology -> Decreased Risk

3. Carefully controlled technology -> Decreased Risk

4. Haphazard release of technology -> Increased Risk

Some interesting points... but I thought it left out the careful control of literacy.

https://chat.openai.com/share/5516a6f8-c15f-449b-803b-69d68b...


Very helpful and I didn’t realize you could share like that


> AI includes, but is not limited to, machine-based systems that can, for a set of defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments

Based on this definition, we’ve had AI since the 1950s. So AI is basically any device capable of computation, according to the FTC.

While I’m not sure in this case it’s really a big change with regard to how the FTC conducts investigations… (I assume they can already subpoena whomever for any reason), it is somewhat alarming in that we’re going to see more and more government mission creep based on similar tenuous definitions of “AI.”

Seems to just be inviting abuses of power in the name of “safety” over basic computer tech we’ve had for decades.


Sounds like one more small — but useful — weapon in the war on general-purpose computing.


There seem to be a lot of commenters on this post who have forgotten that the law and law-adjacent matters are not intended to be parsed by computers with exacting pedantry.

Believe it or not, a judge, prosecutor, lawyer, or clerk, even an inexperienced one, can be relied on to tell the difference between an "if" statement, an algorithm, and a multi-billion-parameter opaque model.

The law is better when it is written without exhaustive, all-encompassing definitions because it allows those who interpret the law to apply discretion. Discretion is important because it allows for situations that were not foreseen by the original authors. It allows someone to say "I don't care what you want to call it, this very clearly is/isn't AI."


I'd imagine most folks are concerned with the other way that discretion cuts--it also allows tenuous interpretations that are not in the spirit of the law or are based on controversial interpretations, judicial activism, etc.

As a lay person, I also object to unclear laws and processes because it's my responsibility to understand and follow them. How am I supposed to do that correctly if they're intentionally ambiguous?


The system isn't without flaws, and unfortunately the American legal system is particularly rife with them. To stay on topic, though: no two people will give you the same definition of "AI", and yet we're rapidly moving into the age of it.

As such the only way to regulate around it is on a "we'll know it when we see it" basis. I know HN doesn't generally care for regulations on technology but the undeniable reality is that the online space has a dramatic effect on the real world and that AI can have a dramatic effect on the online space.


This will most likely get thrown out in court. The executive branch can't grant itself new powers.


I certainly hope so. This is a selective-enforcement nightmare at best, an attempt to bend technology to narrative for nefarious political purposes at worst.


Good. So much attention has been given to nebulous concepts like AGI and alignment, meanwhile I’m sure if Nick Bostrom or Stuart Russell were being denied loans, apartments, college admission or surgeries they would understand that the problem isn’t in identifying or codifying human values, it’s that there are multiple competing and mutually incompatible human values (For example: I want my money but my landlord also wants my money) which play out in the political and economic spheres every day.


No, you want a place to sleep and your landlord wants money. What you said doesn't even make sense.


If it were so, people wouldn't take rent amount into consideration when renting an apartment.




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: