Keep your AI claims in check (ftc.gov)
760 points by deepwaters on Feb 27, 2023 | 282 comments



“It’s an ambiguous term with many possible definitions.”

“Does the product actually use AI at all? If you think you can get away with baseless claims…”

Last I checked basic optimization techniques like simulated annealing and gradient descent - as well as a host of basic statistical tools - are standard parts of an introductory AI textbook. I’ve been on the receiving end of government agency enforcement (SEC) and it felt a lot like a shakedown. This language carries a similar intent: if we decide we don’t like you, watch out!


Yeah, it's pretty laughable that the source they link to for those "possible definitions" right away says this:

> AI is defined in many ways and often in broad terms. The variations stem in part from whether one sees it as a discipline (e.g., a branch of computer science), a concept (e.g., computers performing tasks in ways that simulate human cognition), a set of infrastructures (e.g., the data and computational power needed to train AI systems), or the resulting applications and tools. In a broader sense, it may depend on who is defining it for whom, and who has the power to do so.

I don't see how they can possibly enforce "if it doesn't have AI, it's false advertising to say it does" when they cannot define AI. "I'll know it when I see it" is truly an irksome thorn.

Deterministic if/then statements cover a surprising amount of average human cognition, so who's to say a program composed of them is neither artificial nor intelligent? (That's hand-waving over the more mathematical fact that even the most advanced AI of today is all just branching logic in the end. It just happens to have been automatically generated through a convoluted process that we call "training", resulting in complicated conditions for each binary decision).

In general I like the other bullet points, but I find it really bizarre they'd run with this one.


> Deterministic if/then statements

This was the pinnacle of AI in the 80's. They called them "expert systems".


The principal problem being that expert systems required meticulous inputs from domain experts, codified by skilled engineers. People don't have time or startup capital for actual expertise...


And AI requires the same thing, we just call them data scientists and ML engineers. Using linear-ish algebra instead of decision trees doesn't change the fact that you need time and capital to hire experts.


The big difference is that data scientists only work on the model architecture and data sources, whereas expert systems need people who have expertise in the subject matter itself. One of the biggest changes from 'old AI' to modern ML is that we no longer try to use human domain knowledge as much, instead getting the model itself to see the same pattern from data.


Isn't labeling still an important part of modern AI?


Yes, but there is a whole field of artificial intelligence called unsupervised learning that tries to find structure without pre-defined labels. At the extreme end there are no externally imposed / defined labels, and artificial labels are determined by empirical clusters or some orthogonal data pattern or algorithm. Unsupervised learning is much less effective and not as mature as supervised learning. In the case of LLMs the label is the "next word", inferred from a corpus of text.
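
A minimal sketch of that distinction (assuming scikit-learn; the data here is made up): the clustering step invents its own group labels from structure in the data, whereas a supervised model would need those labels supplied up front.

    # Unsupervised sketch: labels come from empirical clusters, not from humans.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    # Two unlabeled blobs of points; nobody tells the algorithm which is which.
    data = np.vstack([
        rng.normal(loc=0.0, scale=0.5, size=(50, 2)),
        rng.normal(loc=5.0, scale=0.5, size=(50, 2)),
    ])

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
    print(kmeans.labels_[:5], kmeans.labels_[-5:])  # cluster ids invented by the algorithm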


I'd say labels (for supervised ML) are fundamentally different from rules (for expert systems), because

  - labels are easy to decide in many cases
  - rules require humans to analyze patterns in the problem space
  - labels only concern each data point individually
  - rules generalize over a class of data points


Which is very much unlike today’s training set inputs meticulously labeled by domain experts and curated by engineers?


Large language models are the thing the average joe in 2023 would call AI the most, and at the end of the day, if you go deep enough down the 500 billion parameters rabbit hole, it's just a "veryyyyyyy loooooong chain of if-then-else's" obtained after 10s of thousands of hours of computing time over basically all of the text generated by humans over 30 years of internet existence. I know it's not EXACTLY that, but it could be pretty much "recreated" using this metaphorical long chain.


This is kind of like saying every video could be recreated manually with painters.

That is, this is a ridiculous statement.


As you say - any computer program could be considered "AI". It's so ill defined that making the claim doesn't even mean anything.

The FTC is going to get hammered in a court if they ever try to test this.


A pocket calculator is actually a superhuman AI in the microsecond it evaluates an equation. It's AI all the way down.


While I don't disagree with the basic premise ("AI" as a specific falsifiable term is hard to pin down due to the ubiquity associated with the term); I do think there are specific cut-and-dry circumstances where the FTC could falsifiably prove your product does not include AI.

For example, using an alternative to Amazon's Mechanical Turk to process data is clearly a case where your product does not use AI. I believe that is more likely the kind of scenario envisioned when the author was writing that sentence.


On the other end of the spectrum, calling a feature of a product "AI" seems to imply some minimal level of complexity.

If, for example, a company marketed a toaster that "uses AI to toast your bread perfectly", I would expect that language to indicate something more sophisticated than an ordinary mechanical thermostat.


I would expect it to never burn that toast.


A toaster may not burn a piece of toast or, through inaction, allow a piece of toast to be burned.

A toaster must obey the doneness setting given it by human beings except where such orders would conflict with the First Law.

A toaster must protect its own existence as long as such protection does not conflict with the First or Second Law.


That would require the AI to perform at a superhuman level.


That would require the 'AI' to do something that computers are really good at: detect when a particular event (perfectly toasted) has been achieved via inputs monitored at the millisecond level, and then change a state - toasting to not toasting - based on detecting that event.
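
The loop being described is about as simple as control code gets; here is a rough sketch (the sensor and relay functions are hypothetical stand-ins for hardware calls):

    import random, time

    TARGET_BROWNING = 0.8  # hypothetical "perfectly toasted" threshold

    def read_browning_sensor():
        # Stand-in for a hardware sensor read; returns a browning estimate in [0, 1].
        return random.random()

    def stop_heating():
        print("element off")  # stand-in for flipping the heating element relay

    def toast():
        # Poll the sensor every millisecond and change state once the event is detected.
        while read_browning_sensor() < TARGET_BROWNING:
            time.sleep(0.001)
        stop_heating()

    toast()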


It makes sense to protect investors from investing in falsely labeled "AI" tech that isn't really new AI tech, but why do consumers need to be protected? If a software product solves their problem equally well with deep learning or with a more basic form of computation, why is the consumer harmed by false claims of AI?

To put it another way: if you found out that ChatGPT was implemented without any machine learning, and was just an elaborate creation of traditional software, would the consumer of the product have been harmed by false claims of AI?


Did you read the article?

One example given was if the version “with AI” does not perform better than a previous version “without AI”.

So a precise definition of AI isn’t needed. Just that you cannot make misleading claims about your product behind the buzzword of AI.


If you buy a painting advertised as a Monet, you are similarly not harmed if it wasn’t actually painted by Monet. But people like to know what they’re buying.

Less sarcastically, info about how a thing is made helps consumers reason about what it’s capable of. The whole reason marketers misuse the term is to mislead as to what it’s capable of.


Yeah - it needs to be clear to investors if the tech will scale as the business grows and if the tech has a good chance of improving if trained on a larger dataset or ML techniques improve generally.

Consumers should care about whether a product is able to solve an AI-like problem that normally requires domain knowledge. They shouldn't care whether it's done by ML, rules-based systems, or people. (Except perhaps they may want assurance the product will continue to be able to support them as the customer scales.) They should also care about how the decision-making works.


AI is not machine learning. And no, if someone could implement something similar using gotos, it would have the same value.


I know of at least one startup that claimed to use AI (including having AI in the company name), but in actuality humans did nearly all of the work. They hoped that once they got enough customers (and supposedly "proved the concept"), they could figure out how to use AI instead. I bet this is/was somewhat common.

I also see many (particularly "legacy") products say they're "AI-driven" or "powered by AI", when in actuality one minor feature uses some AI, even in the broadest sense.


Mechanical Turk is an artificial intelligence. Just not the kind you imagine. Besides, counting cars is hard, have a human do it.


To quote the points you left out:

> Are you exaggerating what your AI product can do?

> Are you promising that your AI product does something better than a non-AI product?

> Are you aware of the risks?

I'm guessing everyone here has come across examples of "AI" tossed onto something that either 1) wouldn't have been called AI 10 years ago, or 2) is a little scary and/or feels a little unnecessary if something matching a more recent interpretation of "AI" really is core to the function of the product.

Maybe it is a shakedown/warning. I think that's fair. We should have better definitions so that these agencies can't overstep, and products should have a better explanation of what "AI" means in their context. Until then yeah, vague threats versus vague promises.


It sounds like you may have missed the stampede of “AI” companies coming out of the woodwork the last few months.

For every legitimate AI project, there have been a thousand “entrepreneurs” who spend 4 hours putting a webflow site on top of GPT APIs and claim they’ve built an “AI product”. There’s no limit on the amount of BS benefits they claim. They seem like the same people who just finished running the crypto scams.

It seems quite obvious to me that this cohort is the target of this statement.


> spend 4 hours putting a webflow site on top of GPT APIs

GPT _is_ AI though, no? I would think that this would count. It might violate "are you exaggerating what your AI product can do" or "are you aware of the risks" instead though.


> GPT _is_ AI though, no?

Not all of us would agree. We would only take that expression as a rhetorical simplification (a shortening for "part of a broad AI realm"). We would pivot near the concept of "AI" as "a problem solver that could do the job of a professional". This in a way excludes e.g. "building convincing text", because it is not (or should not be) a professional task - though it can surely be part of research.

Doubts are possible on all four FTC points - plus more in the linked guidance post from E. Jillison (e.g. "Do more good than harm" - a difficult measure for engines which have "relaxed sides").


> GPT _is_ AI though

Well, it does not at all have the "intelligence" part of "artificial intelligence", so not really.


From their own report that they linked to:

>In the 2021 Appropriations Act, Congress directed the Federal Trade Commission to study and report on whether and how artificial intelligence (AI) “may be used to identify, remove, or take any other appropriate action necessary to address” a wide variety of specified “online harms.”

>We assume that Congress is less concerned with whether a given tool fits within a definition of AI than whether it uses computational technology to address a listed harm. In other words, what matters more is output and impact. Thus, some tools mentioned herein are not necessarily AI-powered. Similarly, and when appropriate, we may use terms such as automated detection tool or automated decision system, which may or may not involve actual or claimed use of AI.

Quite hilarious really!


This is completely sensible.

It’s just saying the key criteria to evaluate is not whether the software is called AI or not. But what the software actually does.

Isn’t that just common sense?


It appears this is the current state of the FTC. Twist the definitions to their purpose.


To be fair, what they mean is the way marketers and advertisers use "AI". Which is often without regard for any specifics.


I find it hard to sympathize with companies whose websites are full of AI, blockchain, and quantum trash. Honestly, idgaf if they get shaken down. If you have a product that people like, just market your product based on its features, and remove all the BS about using <insert the buzzword of the day>.

If the FTC tells OpenAI to stop mentioning AI, I would be surprised. Even if that happens, I am sure ChatGPT will remain just as popular.


The comment about shakedowns isn't concerned about the bad actors being shaken down.


There is also the high-level question of why exactly the government needs to police this. If it turns out that some Stable Diffusion frontend was actually sending the prompts to a team of Indians who happen to draw really quickly, that is no reason to get the enforcers involved.

If examined closely, the finger wagging in this post is remarkably petty. This guy was likely part of the angry crowd that didn't like Steve Jobs describing the iPhone as "magical". The standard should be "a lie that causes measurable, material harm", not that some company exaggerated in its advertising. Advertisers exaggerate; that is just something people have to live with.


The problem is that this ends with everybody calling their product magic and the word losing its original meaning; soon after it will have a meaning closer to "disappointing" or "lame".


It doesn't really matter what the standard is... What matters is that there aren't some companies who push the limits far harder than others. If there are, then those companies who push the limits of what is allowed harder will be at an advantage, to the detriment of the public and the american economy as a whole.


> ...those companies who push the limits of what is allowed harder will be at an advantage, to the detriment of the public and the american economy as a whole...

Be careful with comments like that. I would remind you that [y]our performance claims would be deceptive if they lack scientific support or if they apply only to certain types of users or under certain conditions. You need adequate proof for that kind of comparative claim, too, and if such proof is impossible to get, then don’t make the claim.

If you think you can get away with baseless claims that [companies using these tactics are going to be at an advantage over companies that just make good/cheap/effective products], think again.


>If it turns out that some Stable Diffusion frontend was actually sending the prompts to a team of Indians who happen to draw really quickly; that is no reason to get the enforcers involved.

Well, if the enforcement agency is the SEC I would think that it made a good deal of difference to the actual value of your company?


I’m sure there’s a company out there who uses some linear equation in their app that they came up with by dumping the data they had into Excel and running the linear regression “data analysis” on it.

> “Before labeling your product as AI-powered, note also that merely using an AI tool in the development process is not the same as a product having AI in it.”

So is y=7.4x+5 an “AI” running inside our app or is it just the output from an “AI tool” FTC?


Replace x and y with matrices and wrap everything in a non-linearity. Swap the 7.4 and 5 constants for variables a and b, and set their values by taking the partial derivative of the loss - the difference between the ground truth value and the predicted y - with respect to each of them.

String together a bunch of these "smart cells" and observe that we can process sequences of data by linking the cells together. Further observe that if we have a separate set of cells (technically it's an attention vector, not quite a group of neurons) whose loss function is with respect to each individual token in the sequences, we can "pay attention" to specific segments in the sequences.

Add a few more gates and feedback loops, scale up the number of cells to 10^12, and you basically have a state of the art chatbot. Capiche?
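
As a rough illustration of that first step (recovering the 7.4 and 5 by gradient descent instead of writing them down), here is a minimal sketch assuming nothing beyond numpy:

    # Learn a and b in y = a*x + b by gradient descent instead of hard-coding them.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=200)
    y = 7.4 * x + 5                 # "ground truth" produced by the secret formula

    a, b, lr = 0.0, 0.0, 0.1
    for _ in range(500):
        err = (a * x + b) - y       # difference between prediction and ground truth
        a -= lr * np.mean(err * x)  # gradient of half the mean squared error w.r.t. a
        b -= lr * np.mean(err)      # gradient of half the mean squared error w.r.t. b

    print(round(a, 2), round(b, 2)) # ~7.4 and ~5.0

(Obviously this toy omits the non-linearities, the attention, and the 10^12 scale that make the real thing interesting.)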


Capiche? I agree with you! Haha.

I’m just curious where the FTC would draw the line. The root commenter made a good point that they seem to make a value judgement of what AI means. We can stretch that meaning pretty far if we want :)


>Last I checked basic optimization techniques like simulated annealing and gradient descent - as well as a host of basic statistical tools - are standard parts of an introductory AI textbook.

maybe the textbook needs to be investigated

that's meant to sound ironic no matter which side of the issue you're on


Given the ambiguity of the term it would actually be better if FTC didn’t step in at all. To let the term dilute itself in its own marketing, to the point where consumers don’t care about it at all or actively avoid products with “AI”.


None of the AI products use "AI". There is decidedly no actual artificial intelligence currently in existence.


I agree to a point, but also language changes and we are clearly seeing a change in the definition of what "AI" means.


That's true if you use the sci-fi definition ("machines that think as well as humans") but the technical definition is a lot broader than that. In academic terms, a sentient machine would be "strong AI" or "AGI (artificial general intelligence)"; we've had "weak AI" for decades.


Yeah, there are companies that just use old-fashioned data scrubbing and ETLs that claim to be AI.


Their intelligence is clearly artificial


Exactly this.


You can't use simulated annealing or gradient descent in your product and claim that you have built something intelligent. That would be laughable and would validate this kind of messaging from the government.

AI is indeed a very ambiguous and subjectively defined term. In my own personal subjective opinion anything that does not have survival instinct is not remotely intelligent. By that definition unicellular organisms are more intelligent than a Tesla self driving vehicle.


A person can certainly claim the product uses "AI". The currently used definition of AI might be absurd, but you can't say such a person is lying or deceiving.


Both of those are literally textbook examples of AI.


I think we can safely assume techniques with analytical solutions computable in polynomial time are neither ML nor AI.

Linear regression is an important part of statistics but is still ultimately a couple matmuls and a matrix inversion in its basic form.
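
For what it's worth, that "couple matmuls and a matrix inversion" is literally the whole fitting step. A minimal numpy sketch of ordinary least squares via the normal equations:

    # Ordinary least squares in closed form: beta = (X^T X)^{-1} X^T y.
    # Just matrix multiplies and one inversion - no iterative "training" at all.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(100), rng.normal(size=100)])  # intercept + one feature
    y = X @ np.array([5.0, 7.4]) + rng.normal(scale=0.1, size=100)

    beta = np.linalg.inv(X.T @ X) @ X.T @ y
    print(beta)  # ~[5.0, 7.4]

(In practice you'd use np.linalg.lstsq rather than an explicit inverse, but the point stands.)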


A good amount of what gets into an ML or AI course, or book, is a couple matmuls and inversion.


Bayesian regression is technically analytical, too, I suppose. Really exposes the blurriness of such a vague term as even "machine learning" let alone "AI".

Bayesian regression is often not really considered AI, unless it's incorporated in a more complicated pipeline (e.g. Bayesian optimization). Same goes for linear regression, then: alone it is just a model.


and there are many more things like this. Back in the day "expert systems" were AI. For any given piece of software, it will meet some definition of AI from some time period.

The term AI is about as concrete as "awesome".


Also video game AI’s. I like to use that as a quick test of a definition of “AI” because video game AI’s span a very wide range of levels of sophistication as well as algorithms, many of which look like deterministic decision trees, most of which don’t use any ML or even regressions.

Arguably that’s a different overload of the term “AI” the way it’s used in business, but I think it’s a good reminder that AI as a field has a long history that developed separately from ML and data science.


wow, i would not have imagined a gov't agency to be agile enough to release something like this. i'm kind of impressed. here's to hoping this isn't just all bark and there's some actual teeth to this.


It's incredible.

The language is not bureaucratic; it's elegant, precise, and clear. Like it was written by a passionate person who cares about the topic, rather than a government drone.


Plot twist: ChatGPT wrote it lol


I went ahead and put in this prompt to ChatGPT (https://chat.openai.com/chat):

====================

> Write a press release from the FTC that warns companies to be careful of promises using "AI".

====================

FOR IMMEDIATE RELEASE

FTC Warns Companies to be Careful of Promises using "AI"

Washington, D.C. - The Federal Trade Commission (FTC) is warning companies that use artificial intelligence (AI) to be careful of making promises that they cannot keep. The FTC has received reports of companies making misleading claims about their products or services that use AI, leading consumers to believe that they are getting something that they are not.

AI has become an increasingly popular buzzword in the business world, with companies eager to capitalize on its potential benefits. However, the FTC is concerned that some companies are using the term "AI" in a misleading or deceptive way. This could be seen as a violation of the FTC Act, which prohibits unfair or deceptive acts or practices in commerce.

In particular, the FTC is concerned about companies that make claims about their AI products or services that are not supported by evidence. For example, a company might claim that its AI-powered product can make accurate predictions about future events or can diagnose diseases with high accuracy. If these claims are not based on sound scientific evidence, they could be deceptive and potentially harmful to consumers.

The FTC is urging companies to be transparent about the capabilities and limitations of their AI products and services. This includes being upfront about how their algorithms work, what data they use, and what kinds of errors or biases might be present. Companies should also be cautious about making sweeping claims that cannot be substantiated.

The FTC is committed to protecting consumers from deceptive or unfair practices, including those related to AI. Companies that make false or misleading claims about their AI products or services could face legal action, including fines and injunctions.

Consumers who believe they have been misled by a company's claims about its AI products or services can file a complaint with the FTC at ftc.gov/complaint.

Contact:

FTC Office of Public Affairs 202-326-2180


My God, we finally found a constructive use for ChatGPT. "Rewrite this government bureaucracy's muddle-by-committee-legalese document into something that sounds like it was written by a human."


I asked ChatGPT to summarize the article (prompt was "Summarize the following article in 10 sentences"):

> The Federal Trade Commission (FTC) has issued a warning to companies using the term "artificial intelligence" in advertising, highlighting that some AI products might not work as advertised. The FTC stated that AI hype is playing out in many products, from toys to cars, and chatbots, and companies should ensure they are not making false or unsubstantiated claims about a product's efficacy. The regulator is asking companies using AI in their advertising to ensure that they are not exaggerating what their AI product can do, promising that their AI product does something better than a non-AI product without adequate proof, or are aware of the risks. Additionally, the FTC is asking if the product actually uses AI at all, noting that merely using an AI tool in the development process is not the same as a product having AI in it. Advertisers should refer to the FTC's earlier AI guidance, which emphasized fairness and equity while cautioning against overpromising what an AI-based tool can deliver.


The original FTC piece sounds a lot more human than this ChatGPT press release. The original piece is understandable and gets right to the point.


True that, but I think we can agree that it is the exception, as bureaucratic texts go.


can't wait for AI legislation, perhaps we can replace all the interns writing it now


For once the URL is actually working!


I paid for the ChatGPT Plus version the other day because I wanted to use it reliably.


"a sucker (fool) and their money are easily parted" -- PT Barnum


Yeah, us and our stupid lying eyes…


maybe the FTC should investigate OpenAI's claims of uptime


This reads much better than the actual piece XD


No, it really doesn't.


They did say “new-fangled” though.


Slightly obnoxious too … especially the last sentence.


Perhaps a little too elegant, precise, and clear... wouldn't it be deliciously ironic if this blog was written by chatgpt?


ChatGPT is sometimes precise and clear, but I've yet to see any writing come out of it that I would describe as elegant. It tends toward the mediocre-college-essay end of the writing quality spectrum.


In my experience, chatgpt is incredibly wordy / verbose. This doesn't read as chatgpt output at all.


Note that for the banking industry various "semi-regulations" have been in place, spawning a cottage industry for "Model Risk Management". Basically you have to explain how you (attempt to) keep bias out of your training, re-test often, etc. The point here is that as a bank you don't stop providing loans to certain demographics, etc.

https://www.fdic.gov/news/financial-institution-letters/2017...

https://occ.gov/publications-and-resources/publications/comp...


> here's to hoping this isn't just all bark and there's some actual teeth to this.

Let's see what the article says:

> When you talk about AI in your advertising, the FTC may be wondering, among other things: (...)

Just "wondering", no teeth.


"Wondering" = discovery = you're going to have a bad time


yeah, the last thing i want is the federal gov't wondering about me


you can be assured that's how the FTC points out the teeth


Lina Khan is running a new FTC and she's incredible


> wow, i would not have imagined a gov’t agency to be agile enough to release something like this.

The FTC is actually pretty good about doing things like this when a wave of similar types of borderline business practices erupt.


Jokes on them, I'll call my AI product Full Self Thinking.

It's $10,000 if you buy now, but we'll be raising prices to $15,000 by June.

And yes, it will just be a worse version of the latest ChatGPT, but I'll hide it in a black box and continually tell you that it's so far ahead of the competition, we can't even see them in the rear view mirror!


But, they're abominably bad at actually doing anything other than saying they might care.


> But, they’re abominably bad at actually doing anything other than saying they might care.

I would say hot & cold, and often driven by policy focus of the Administration (or at least the FTC majority, which is influenced by the party in the executive, but not necessarily completely aligned with the executive in detail.)


On the contrary, FTC seems to have given up on pressing problems for the American consumers (like price gouging in broadband, generics, etc.) and is instead going after "trendy" topics that will get them and their leaders more press coverage.


The FTC wasn’t doing anything for decades. It’s doing a lot now… like it was made to do. Unchecked monopolization of so many markets has been terrible for the economy.


I had the same thought!

Then, my next thought was:

Who (in industry) wrote this? (and got FTC to publish it? wanting to drive some agenda forward).


> By Michael Atleson, Attorney, FTC Division of Advertising Practices


It's not always industry, sometimes agencies are closer to academia, certain types of lawyers (maybe this is considered industry too), etc. But seems in line with Lina Khan's efforts


I hope people develop a better understanding of this than the bloody coin fiasco. Society can't effectively deal with something it considers magic - even a low-resolution understanding + accessible expertise will help massively (think law or medicine). Take ChatGPT. In the case of humans, language is just a way to express knowledge. In ChatGPT it's the other way around - the focus is the correctness of the word chain. It happens to get some things right because other people have chained similar sentences together in expressing that knowledge. Even if it's "lying" to you, as long as the sentence is correct and the paragraph reads nicely, it has done the job it was designed to do.


I have no higher hopes that AI products will be regulated any better than cryptocurrencies. Fundamentally they both operate at breakneck speeds, and regulators have proven they can, at best, only react to damage being done. Maybe they will surprise us all and proactively regulate AI well, but they'd have to prove that instead of being taken at their word like this press release. Technology's speed combined with human greed is what allows it to do damage before regulation knows what's going on, and that fundamental difference isn't likely to change.


We will see regulation progress once there's a proof of concept AI that kills many people or causes great property damage.


> Society can’t effectively deal with something it considers magic

Up until very recently, society considered everything magic. Many still do.


We are working on an AI product in a highly regulated industry (Investing). Recently we have been experimenting with using the GPT-3 API as a Junior Equity Analyst. On an eyeball check, the results of the technology are impressive.

The problem is that there is no way to validate the feedback at scale, i.e., we can't receive statistics about the feedback from the API.

In contrast, for our own Entity Recognition models we can (and do) calculate probabilities that explain why a certain entity is shown.

Hence, I think for API users of GPT-3, OpenAI should return additional statistics on why a certain result is returned the way it is, to make it really useful and, more importantly, compliant.


GPT is a voluble, entertaining bullshitter that is occasionally correct. It is not reliable, and for now, it cannot be made reliable.

It is a very impressive toy, but still just a toy for now.


It can be made more reliable.

Here’s how:

https://github.com/williamcotton/empirical-philosophy/blob/m...


For now GPT is creating the filler content that moves the Web in 2023. But, given the results I have seen from our PoCs, it can do more and will do more in the future.


> that is occasionally correct

It's almost always correct, otherwise it would be worthless.

Yes, it's possible to construct questions that lead to nonsensical answers, and sometimes nonsensical answers are given even to sensible questions, but saying that ChatGPT's answers are "occasionally" correct is weapons grade BS. ChatGPT is a hair's breadth from being an ultimate answer machine, and is far more likely to be correct on almost any question than the average human.

The FUD that is currently being manufactured around language models is insane. I guess we should all stop using search engines, since those are even less reliable.


It's nearly always wrong in my experience. If I ask it a basic technical question, it gets stuff mostly correct, but if you look too closely at the details, errors start popping up. If you ask it something relatively complicated, it's nearly always wrong.

Like as an easy example of the latter, I asked chatGPT "What is the S4 model in NLP?" and it gave back

> The S4 model (Simple Synchronous Syntactic-Semantic Sentence Simplification Model) is a model in natural language processing (NLP) that aims to simplify complex sentences while preserving their original meaning. The model was introduced in the paper "A Simple Synchronous Syntactic-Semantic Sentence Simplification Model" by Siddharthan et al. (2014).

> The S4 model consists of two main components: a synchronous grammar and a semantic model. The synchronous grammar is used to identify the syntactic structure of the input sentence and generate a simplified version of it. The semantic model is then used to ensure that the simplified sentence retains the original meaning.

> The S4 model is designed to address the problem of text simplification, which is the process of transforming complex text into simpler text that is easier to read and understand. Text simplification is important for making information more accessible to people with cognitive disabilities or limited literacy skills, as well as for improving readability for non-native speakers of a language.

> Overall, the S4 model represents an important contribution to the field of NLP, as it provides a simple yet effective approach to sentence simplification that can be used in a wide range of applications.

However, all of the above statements are like complete nonsense. The citation doesn't exist and the S4 model that the thing is saying exists doesn't.

I have found search engines to be significantly more reliable (with the ability to actually check sources) than chatGPT. Usually, when I get decent answers from chatGPT, I would also get the same from Wikipedia which is usually a top result on most search engines. Do you have an example of an entire class of technical queries where chatGPT might perform better?


> "What is the S4 model in NLP?"

99.99% of people would be unable to answer that question (without looking it up, I mean). Such hyper-specific queries for highly technical information from niche fields say very little about the model's overall performance at natural language tasks.

If you ask things like "Which of these animals doesn't live in Africa?" or "What is the most reactive chemical element?", ChatGPT's answers are almost always correct. And they are far more likely to be correct than the average (unaided) human's.


We already had Watson for Jeopardy-style general knowledge quiz questions a decade ago. It didn't revolutionize anything.


Update. This morning I asked ChatGPT what day today was. It answered correctly. I then asked how it could know that given that its training data ends in September 2021. It said it was based on the number of days since its training data ended. I pointed out it still had no way of knowing that number of days if it had no knowledge past September 2021. It kept apologizing and repeating the same story over and over.


ChatGPT is almost always bullshitting if you ask it to create a complete list of something with more than 10 entries or so.


I'm not sure exactly what the ask here is.

>In contrast, for our own Entity Recognition models we can (and do) calculate probabilities that explain why a certain entity is shown.

>Hence, I think for API users of GPT3, OpenAI should return additional statistics why a certain result is returned the way it is to make it really useful and more importantly compliant.

For LLMs, you can get the same thing: the distribution of probabilities for the next token, for each token. But right now we cannot say why the probabilities are the way they are, same goes for your image recognition models.
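
For the 2023-era completions endpoint that looks roughly like the sketch below. The logprobs parameter is real; treat the exact response layout as my best recollection and check it against OpenAI's API reference.

    # Rough sketch: per-token probabilities from the GPT-3 completions API.
    import math
    import openai

    openai.api_key = "sk-..."  # your key here

    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt="The FTC regulates",
        max_tokens=5,
        logprobs=5,  # also return the top 5 alternatives at each position
    )

    lp = resp["choices"][0]["logprobs"]
    for token, logprob in zip(lp["tokens"], lp["token_logprobs"]):
        print(repr(token), round(math.exp(logprob), 3))

That tells you how confident the model was in each token, though not (as noted above) why the probabilities came out the way they did.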


The problem in a nutshell, and the one the FTC had pointed out, is model explainability. In the past I worked on an AI for automated lending decisions. We were asked to be able to explain every single decision the engine took.

If now a news article reaches our AI engine, it will tag, categorize, classify, and rank this news article. All based on models that are explainable.

LLMs, at least how I personally implemented them in the past, create a huge black box that is largely non-explainable.


A blogger (an experienced ML professional in fintech) published an excellent write-up of AI in fintech in December - his basic take was that there's a lot of room for the tech to grow before it becomes truly ubiquitous, because answers in finance must be correct 100% of the time. Worth a read!

https://chaosengineering.substack.com/p/artificial-intellige...


You can return log probs per token generated. This can be used to assess the confidence the model has in handling tasks which involve nominal data.

If that’s not helpful, were you getting at having the model return some rich data about the attention weights that went into generating some token?


For most of our models we return more information. Especially if you look at it from a vendor/customer perspective I believe this to be quite important.


And what happens when you run out of analysts with 3 years of experience in <checks notes> three years?


[flagged]


It is a regulated industry. Compliance exists for a reason. As we have seen during the crypto fallout... ^-^


How are the companies of tomorrow supposed to innovate when they’re strangled by behaving responsibly??


it hasn't been a problem for the social platforms or other companies operating in the hoovering up of personal data for profit. the trick is to get too big to fail status before gov't agencies can come shut you down.


> the trick is to get too big to fail status before gov't agencies can come shut you down.

While avoiding getting bought by Elon \o/


Why would you want to avoid that? Get paid the GDP of Lithuania for a fairly stodgy messaging platform with a big question mark over how many real users there even are and then getting to look like a genius in comparison to the next guy's run at it sounds like a great deal to me.


Haha yes, if you are at the top level! If you are just a normal employee getting paid 2-3x more than what you would get if you were working in most other companies, probably it's less fun to get bullied, then fired, and then to get a more decent salary for your work.


is that a concern anymore? he's so leveraged with the Twitter purchase, i doubt he could borrow enough money to buy a Tesla


That's probably a relief for many tech employees who won't get fired for no particular reason, then :D


I like the "no particular reason" concept you have. It's absolutely not true, but I still like it. The company over staffed because it was trendy to do so. The company was not profitable. Seems like a pretty good particular reason to me.

How it was handled can be discussed as being good/bad/fair/unfair/whatevs, but the numbers didn't lie on revenue, just on the user counts.


You must have missed the many announcements of multiple thousands of layoffs happening at many big tech companies lately ;)


The "behaving responsibly" part is about the marketing, not the innovation.


It's a bit harsh, there are great examples of companies behaving responsibly: just look at Amazon or Twitter, or oil companies. Even the FIFA! /s


I think this is great and as a practitioner, something I've incorporated and demand in all of the contracts I work on is a clear and explicit validation and uncertainty analysis. It was always an issue when I worked at big corp that there was pressure to bury the lede when it came to uncertainty/ propagation of error. I considered it to be misleading and since then I make sure that any work that I'm a part of has in explicit language exactly how we'll analyse and approach bias and uncertainty as a part of the original work. I also try and make sure that assessments of uncertainty are 'part of the product' so there is no getting around things and every prediction also comes with an 'I told you so' layer.

I'm an absolute proponent of the kinds of products that get labeled as AI. I also think that useful predictions can be made in the face of uncertainty because, let's be real, human assessments, predictions and decisions also come with a degree of uncertainty (we usually just fail to rigorously quantify them).
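
A concrete (if simplistic) version of making uncertainty "part of the product" is to never return a bare point estimate; a hedged sketch, using a bootstrap interval as a stand-in for whatever uncertainty analysis a given contract actually specifies:

    # Sketch: every prediction ships with an interval, not just a number.
    import numpy as np

    def predict_with_uncertainty(history, n_boot=1000, seed=0):
        """Predict the next value as the historical mean, with a bootstrap 90% interval."""
        rng = np.random.default_rng(seed)
        history = np.asarray(history, dtype=float)
        boot_means = [rng.choice(history, size=len(history), replace=True).mean()
                      for _ in range(n_boot)]
        return {"prediction": float(history.mean()),
                "low": float(np.percentile(boot_means, 5)),
                "high": float(np.percentile(boot_means, 95))}

    print(predict_with_uncertainty([3.1, 2.9, 3.4, 3.0, 3.2]))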


This kind of post from the FTC is refreshing. I appreciate how authoritative it is.


Authoritative and the intro is even poetic.


Surely these overlords are true masters of irony and this was written by Chat GPT.


Just me, or does this read like a Godfather-style warning?

"Nice AI product you got over there. Would be a shame if something happened to it."

But what's different about this AI product/hype cycle compared to every other cycle that tech has had? Why does the FTC feel the need to make proactive enforcement warnings about this one? What about crypto? Or going back further, the dotcom era?


> But what’s different about this AI product/hype cycle compared to every other cycle that tech has had?

That it's a hype cycle that is happening now, with a lot of shady actors overselling things.

FTC has done things like this during other cycles, too. Letting industries know they are aware, and simultaneously letting the public know to be aware (often, they’ll provide multiple messages, some specifically worded toward industry and others toward the public, but even with just one the message works both ways.)


Maybe they think not doing this in the past caused problems so they're trying something new?


Very good question.

On one hand, the US is surely happy that they’re the epicentre, on the other hand the potential impact on unemployment (fiscal and social unrest) is massive….


Ironically, here in Australia we've just been having a "Royal Commission" (high end public legal investigation) into how the government used an algorithm to pretty much screw over a bunch of people on welfare (several years ago).

https://robodebt.royalcommission.gov.au

So, an "AI" (or ML, etc) version of said scheme could also impact "unemployment" in some further crappy ways too.


There hasn’t been a functioning FTC for decades.


> If something goes wrong – maybe it fails or yields biased results – you can’t just blame a third-party developer of the technology.

Implicit shoutout to all the “slap a front end on ChatGPT” products out there.


Serious question: if I keep all known protected class information out of my pricing model, and it still decides to charge macOS users more for the product than Windows users, is it discrimination? Is it unfair? Now if I go and look at real outcomes and add back in the protected class information that was withheld from the model and run an analysis, and the results turn out to be post-facto biased, is it discrimination? Is it unfair?

Does the FTC have an objective and consistent framework for evaluating this stuff?
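
One common post-facto check (not necessarily anything the FTC prescribes) is to re-attach the withheld protected attribute and compare outcome rates across groups, e.g. the "four-fifths rule" used in employment contexts. A minimal sketch with hypothetical column names:

    # Post-hoc disparate-impact check; column names are made up for illustration.
    import pandas as pd

    df = pd.DataFrame({
        "os":              ["macos", "windows", "macos", "windows", "macos", "windows"],
        "protected_group": ["A", "A", "B", "B", "A", "B"],
        "charged_premium": [1, 0, 0, 0, 1, 1],  # did the model price this user above baseline?
    })

    rates = df.groupby("protected_group")["charged_premium"].mean()
    print(rates)
    print("selection-rate ratio:", round(rates.min() / rates.max(), 2))  # < 0.8 often flags disparate impact

Whether a disparity found this way counts as unlawful discrimination is exactly the question being asked here.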


> Does the FTC have an objective and consistent framework for evaluating this stuff?

No. If they say that, they are lying or ignorant.

https://fairmlbook.org/tutorial2.html


I don't understand. This is, like, the government, making a rational and proactive warning to business about not doing shady stuff under cover of the latest buzzword. It's like there's nothing you can rely on any more.


This is fascinating; it's a bit refreshing to see plain text for normal people. I can't be the only person with extreme anxiety reading Sam Altman's twitter and wondering if my career choice was the wrong one some days. It would be nice if this FTC post tamped down that hype machine.


> reading Sam Altman’s twitter

He has made a pronounced and clear shift to a “thought leader” account after the OpenAI-MS investment.

That can only mean that OpenAI is dying.


It’s been so strange to see this shift. I’m not sure if it’s true or not but it’s how I feel. He sometimes will just say stuff out of the blue. It makes me wonder what is hype, and what is real.


He built a language model on corporate executive communications and internalized it


Thoughts:

- Yay, in spirit.

- I already have an “AI” toothbrush. If that’s the bar, it’s not high. If the bar is different for Oral B because they’re big, that’s bad.

- Diluting or fuzzying useful terms (like “crypto”) is bad. (Why should I change, they’re the ones who suck).

- Inventing new terms, like “smart” and “Big Data” is actually positive, because they are easy to translate to their actual semantics (woo woo and gimmick).

Personally, I already assume AI = mumbo jumbo (unless proven otherwise), but I can see how it’s annoying for people working in the field.


A major cost of technology is responsibility. E.g Facebook's cost w/ moderators, Google's cost w/ customer support for user's life data, etc...

Companies are more profitable whenever they can shift that responsibility to the end user or society-as-a-whole.

Technology empowers the individual at the cost of increased personal burden. Often it's not possible to opt out, for economic or social reasons, so responsibility requirements keep increasing.

This isn't necessarily good... There's a point where the pressure becomes toxic, especially to the life force of children, the young, and aspects associated with the young like carefree spirited living, open-mindedness, lack of existential crises, etc...

Everyone becomes "older" and more plastic and robotic, concerned with things like social status, making money, number optimization, etc... (scenes from The Little Prince), and has to deal with things like existential crises and depression at a younger age.

An explosion of AI tools is likely going to accelerate this trend...


> Advertisers should take another look at our earlier AI guidance, which focused on fairness and equity but also said, clearly, not to overpromise what your algorithm or AI-based tool can deliver. Whatever it can or can’t do, AI is important, and so are the claims you make about it. You don’t need a machine to predict what the FTC might do when those claims are unsupported.

Sounds like a statement completely detached from reality. A significant share of marketing statements are unsupported by evidence, and nothing the FTC does prevents this from happening.


I mean cool. But honestly what is this "fairness and equity" bullshit? Trying to nip false advertising in the bud is one thing and I believe something the FTC has authority over. But who in the world granted the FTC the authority to pursue the fairness and equity of any product, let alone an AI product? I mean I'm not anti-fairness but I'm also astonished to see the FTC trying to regulate it. Thou shalt not sell a product that doesn't achieve equity? Really?!


If you actually go read the linked article, it's pretty clearly spelled out:

The FTC has decades of experience enforcing three laws important to developers and users of AI:

    Section 5 of the FTC Act. The FTC Act prohibits unfair or deceptive practices. That would include the sale or use of – for example – racially biased algorithms.
    Fair Credit Reporting Act. The FCRA comes into play in certain circumstances where an algorithm is used to deny people employment, housing, credit, insurance, or other benefits.
    Equal Credit Opportunity Act. The ECOA makes it illegal for a company to use a biased algorithm that results in credit discrimination on the basis of race, color, religion, national origin, sex, marital status, age, or because a person receives public assistance.

Seems pretty clear cut to me that they're saying you can't use the 'but the AI did it' excuse if the net result is that your product is violating these.


If you actually go read the FTC Act, section 5 is about preventing unfair methods of competition. It's a massive stretch to interpret that as "deploying an unfair consumer algorithm".


You missed the second half of 15 USC § 45.a.1:

(1) Unfair methods of competition in or affecting commerce, and unfair or deceptive acts or practices in or affecting commerce, are hereby declared unlawful.

Now, IANAL, but it seems to me that the example the FTC gave about racially biased 'AI' would fall well under the second clause.


Not a lawyer either, but take a look at this one's definition of "in or affecting commerce" under the Business Activities, Sometimes section:

http://www.stephenson-law.com/news/“-or-affecting-commerce”-...

> The proper inquiry is not whether a contractual relationship existed between the parties, but rather whether the defendant’s allegedly deceptive acts affected commerce.

I'm really not sure that "gave different demographics of users different responses to optimize product satisfaction" is considered in any way an "unfair or deceptive" practice "in or affecting commerce".


> I'm really not sure that "gave different demographics of users different responses to optimize product satisfaction" is considered in any way an "unfair or deceptive" practice "in or affecting commerce".

Replace 'product satisfaction' with ARPU and suddenly it could very well be. Charging users different amounts, even if it only appears that it might be racially biased, would be a great way to invite more scrutiny.

Ultimately, I figure the FTC probably has a few lawyers on staff, and they probably ran these messages past at least one of them. So they're probably a little more certain of the soundness of the messages than us armchair lawyers.


If you're looking for the actual basis for this, that would be the Civil Rights Act of 1964 and the Americans with Disabilities Act, and all of the numerous ways in which these doctrines are required to permeate how the executive branch protects citizens against many forms of discrimination. The US has sweeping anti-discrimination laws, and they apply to private business too.

This shouldn't be a surprise to anyone who has learned about US history.


Suggest you read the link they provided, since they clearly do have legal authority over these topics as defined by US legal code:

https://www.ftc.gov/business-guidance/blog/2021/04/aiming-tr...


Can you link me to the text of the FTC Act that backs the claims made in the article that the FTC has authority over unfair consumer "algorithms"? They are quite editorialized. I responded in a sibling comment to someone saying the same thing. The only text I can find in the FTC act is related to unfair business practices with respect to competition against other businesses. Not "biased algorithms".


Did you even read "Section 5 of the FTC Act"? To me it seems like you're willfully ignoring information you already have:

https://www.law.cornell.edu/uscode/text/15/45


Which part of this:

> Unfair methods of competition in or affecting commerce, and unfair or deceptive acts or practices in or affecting commerce, are hereby declared unlawful.

applies to an "inequitable AI algorithm"? I'm really serious, this section talks about unfair competition between businesses and unfairly or deceptively affecting commerce. Here's a lawyers definition of "in or affecting commerce": http://www.stephenson-law.com/news/“-or-affecting-commerce”-...

Specifically, it's not illegal to make a product that treats people unfairly. But it's illegal to treat people unfairly in commerce. My reading of that is that if I offer my shitbot to you fairly and non-deceptively, and it treats you poorly, that's not the FTC's concern.


That, as the article clearly lays out, is a separate but closely related subject. It's a concern founded on the same common misconceptions about AI that they are talking about here.

Machine learning models are not well equipped to handle the issues of prejudice or fairness, because those are, in the most literal sense, subjective.

Machine learning models as a feature do not interact with subjects. Subjects are too computationally hard, so they were optimized out instead.

As a latent result, any behavior implemented with machine learning models must be carefully curated, otherwise subjective patterns like prejudice and groupthink will pollute the model.

This is a trivially predictable continuation of prejudice as a language feature: it's functionally much easier to generate a continuation out of a biased circular position than it is to generate one that includes objective consideration.

In fact, objectivity itself is a feature that has not been implemented yet in any AI software. Prejudice, on the other hand, is so easy to implement that it happens by accident.


In related news, Tesla has been forced to tell the truth about "Full" Self Driving:

https://electrek.co/2023/02/27/tesla-disappoints-owners-upda...


Next the FTC will be regulating the use of the word "awesome". It's about as concrete as "AI".


I regenerated a ChatGPT response with the same prompt and got some totally made up quotes from the (real) FTC Chair. Just remember, it's a talking machine, not an intelligent one.

FTC Warns Companies to Be Careful of Promises Using "AI"

Washington, D.C. - The Federal Trade Commission (FTC) is warning companies to be careful of making deceptive or misleading promises regarding the use of artificial intelligence (AI) in their products or services.

As the use of AI becomes increasingly prevalent in a wide range of industries, companies may be tempted to tout their use of the technology as a selling point. However, the FTC cautions that such claims must be substantiated and truthful, and not likely to mislead consumers.

"The use of AI can bring significant benefits to consumers, but it is important that companies are transparent and truthful about the capabilities of their products and services," said FTC Chair Lina Khan. "Claims that are unsubstantiated or misleading can harm consumers and undermine trust in the marketplace."

The FTC has issued guidance to help companies understand their obligations under the law when making claims about the use of AI. Among other things, the guidance advises companies to ensure that any claims about the capabilities of their AI systems are backed up by sound science and evidence.

The FTC also warns against making claims that suggest AI can perform tasks that it cannot. For example, a company cannot claim that its AI system can accurately diagnose a medical condition if it has not been adequately tested and validated for that purpose.

Finally, the FTC cautions that companies must be transparent about how they use consumer data in their AI systems. Consumers have a right to know what data is being collected, how it is being used, and how it is being protected.

"AI has the potential to transform many aspects of our lives, but companies must use it responsibly and ethically," said Khan. "The FTC will continue to monitor the marketplace and take action against companies that deceive or mislead consumers."

The FTC's guidance on the use of AI can be found on its website at www.ftc.gov.

Contact: FTC Office of Public Affairs Phone: 202-326-2180


Did your 2017 ICO not work out? Were you a little late to the NFT party? Now is your opportunity to get involved in the extremely profitable world of AI!


The FTC shouldn't be threatening citizens and companies in random blog posts. It's inappropriate.

The FTC appears to have some serious problems at the moment.


FTC is a regulator whose "principal mission is the enforcement of civil (non-criminal) antitrust law".

So it is completely fine for them to remind citizens and companies of their obligations.

Especially given in the AI space there is so much questionable behaviour going on.


This is not a reminder. It's a casual threat.

And minor nit - this has nothing to do with antitrust. If you were going to highlight their mission, at least show the deceptive practices part.


In this case they aren't mutually exclusive. A government agency reminding the public it has legal authority to regulate is always threatening.


It's the casual part that is inappropriate. The government should not casually threaten the citizens.


You keep using the word casual here, but what exactly do you mean by that?

The statement seemed anything but casual. It has likely been worked on and vetted by multiple people, and while it does have a somewhat brief and light tone, I don't think that qualifies as 'casualness'. For a statement meant for a broad audience, the tone seems appropriate enough.


It's a blog post. It has a brief and light tone. There aren't many ways to make it more casual. Perhaps a tweet?

Legal threats shouldn't be made this way, especially by a government toward citizens.


Most people don’t understand how regulatory agencies actually operate (banditry), but most people do vote. So they have conflicting goals; on the one side, they need to create a paper record so that judges will approve their future banditry, but on the other side they need to disguise their intentions from the public. This blog post by the FTC is unusually well done and effectively meets both goals, so I disagree with you that the threats shouldn’t be made this way as it is clearly in their own benefit to do so.


I find your post deeply ironic, because I have found a number of AI proponents to be the source of casual threats to their co-citizens, a la "We're going to use AI to automate you out of a job. Better learn to adapt by paying us to use our tools yourself. If you can't hack it, sucks to suck, lol".

One company was encouraging people, via a contest, to take some guy's work to train models and see who could best replicate it. They sent an email to the guy to let him know and say how unfortunate it was that he was their target, but offered to work with him to train an "official" model.

Casually threatening is how I'd describe those interactions, not this FTC post. Not suggesting that you approve either by the way.


Would you prefer it be more threatening language...?


Agreed, good way to put it. Definitely a very weird post.


Isn't it their purpose to set the boundaries?


https://en.wikipedia.org/wiki/Federal_Trade_Commission

The first section is a decent summary.

Blog posts making casual threats about AI claims are a long way from appropriate. There are places for formality, and the law is one of them. There are serious people quitting the FTC over current leadership, claiming it engaged in behavior that wildly oversteps its legal authority.


It is not only appropriate but normal for regulators to try to affect behaviour without having to resort to enforcement activities.

And an informal blog post in a less-legalese tone is a welcome way to do this.


There is no place to make informal threats in blog posts. It's not welcome at all.

Nobody said anything about enforcement activities. The FTC has all kinds of mechanisms to make it known what the rules are: the rules themselves, notices, policy statements, what amount to position papers, etc. This isn't kindergarten, and the citizens of the united states aren't children. Blog posts with a friendly tone and threats of punishment for ill behavior simply aren't appropriate.


> citizens of the united states aren't children

You can't put it in these terms and not take a moment to reflect on what's happening in our technology sphere, in particular with regard to CEOs of companies directly involved in these dubious AI claims where customers' lives are on the line.

I am deeply sympathetic to the FTC's position with respect to trying to get a message across where common sense and formal communication have no effect.


It's important to put it in those terms even when there are bad actors, because there always will be. The US government isn't supposed to be our keeper, it's supposed to be of the people, by the people, for the people.

Aside from the problematic medium, claiming something uses AI or is AI or has AI isn't a specific claim with respect to what the product can or cannot do. A product can do X or not. Whether someone punched out 7 million lines of if/else statements or 7 lines of pytorch to approximate the 7 million lines of explicit code matters not. As such, a product making any claim of AI isn't actually a large problem, even if it is total bs. The fact that the current folks at the FTC don't appear to understand this suggests they are trying to regulate something they don't understand.
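To make the if/else-versus-pytorch point concrete, here is a rough sketch of what those "7 lines of pytorch" might look like; every size, shape, and layer here is invented purely for illustration and isn't taken from any real product:

  import torch
  import torch.nn as nn

  # A tiny learned classifier standing in for a pile of hand-written if/else rules.
  # (Illustrative only: the input size, hidden size, and class count are made up.)
  model = nn.Sequential(
      nn.Linear(10, 32),   # 10 made-up input features
      nn.ReLU(),
      nn.Linear(32, 2),    # 2 made-up output classes
  )
  scores = model(torch.randn(1, 10))  # same contract either way: inputs in, decision out
  print(scores.argmax(dim=1))         # the "decision", learned rather than hand-coded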


I'm not sure I understand your point about what the product can actually do: this whole blog entry is about keeping marketing claims tied to reality, to what exactly the product does, and not making vague promises just because it uses AI.

Basically, outside of their tone, you seem to me 100% in agreement with the FTC's message here.


You are right in that the tone/medium of the message is the problematic issue.

But, even outside that, claiming something is/has/uses AI is practically meaningless. It's not a concrete claim, so consumers of said product aren't damaged by such a claim. A claim of risk-free returns of 50% causes harm. A false claim of curing cancer causes harm. So no, I'm not in agreement with it. The FTC should find more important things to spend the time of their 1100 people on, especially when they don't even seem to understand the topic at hand.


> The first section is a decent summary.

I read the first section and it doesn't seem to support your claim. In particular, it says they have a mandate to protect consumers and links to another wiki entry on that topic. Within that wiki entry, protection against misleading statements is mentioned as an aspect of consumer protection. This seems to refute your claim of overreach, because in the blog post giving guidance to companies, the point struck again and again is that companies should not make misleading statements about AI capabilities. That is in complete alignment with their mandate according to the link which you claimed was a decent summary. It is also in alignment with the rest of the wiki, which covers topics like false advertising.

> Blog posts making casual threats about AI claims is a long way from appropriate. There are places for formality, and the law is one of them.

The use of sequences of characters in digital spaces for the purposes of communication is not inherently inappropriate. This particular sequence of characters is guidance for companies and requests that they not lie or make false claims about capabilities. This relates to their mandate, because they have a mandate to protect consumers from misleading statements: for example, lies about capabilities, or false statements about capabilities made unknowingly under the presumption that models were more effective than they are.

> There are serious people quitting the FTC over current leadership, claiming they engaged in behavior that is wildly overstepping their legal authority.

Can you link to one of these people who are so serious they have quit making the claim that the FTC shouldn't provide guidance to companies via their government website at a URL under https://www.ftc.gov/business-guidance for the purpose of discouraging companies from misleading consumers regarding the capabilities of their AI models?


The first point was just pointing to a summary of their purpose, because the comment above had asked about their purpose. It wasn't in support of any claim - to be abundantly clear, the FTC is supposed to be regulating deceptive practice in business.

The second was a statement that an informal blog post threatening citizens is inappropriate, and that the FTC has been accused by insiders of engaging in inappropriate behavior on several fronts. Two FTC commissioners resigned recently (one making strong claims of illegal and inappropriate behavior in a WSJ op-ed), and the FTC's recent case against Meta was seen as an overstep by the relevant court.


Thank you! I appreciate the abundance of clarity. It corrected a misconception I had about what you were saying.


A note for people who want to improve government, specifically in areas like these -- the FTC is hiring for its new division meant to work explicitly on tech

https://twitter.com/linakhanftc/status/1630273232535265282?s...


As they also mention, this is not the first time FTC has done this. Here is the earlier AI guidance from 4/2021: https://www.ftc.gov/business-guidance/blog/2021/04/aiming-tr....


All seems pretty reasonable to me.

I hope we can all discuss AI in a more calm and nuanced manner. Leave the sensationalism for other sites!


> Leave the sensationalism for other sites!

Hah!


Do we really need the government to tell us what AI is? Does anybody really know, or is anyone the complete and final authority on the subject? If the government had told us what the internet's limits were in the 90s, it's doubtful they would have been correct, and they would have scared away a lot of innovation.


> Do we really need the government to tell us what AI is?

That’s…very much not the focus of this piece.


The FTC unilaterally deciding and punishing for 'bias' is far more terrifying than a biased AI model.


This reeks of anticompetitive regulatory capture after Google dropped the ball on AI-enabled search.


Google already is AI-enabled search.

And it's questionable whether that GPT sidebar feature is even worth it, given how inaccurate it is.


That is extremely disingenuous. Google search is absolutely not AI-enabled in the same manner that ChatGPT and Phind are, and all of their attempts to develop that sort of functionality have crashed and burned.


Google has been using the BERT language model to drive its search results for several years. It's not the same as ChatGPT, but it is still what most people would call AI.


One thing they interestingly did not mention is AI ethics, a scene that has become very beloved in recent years.


I would like to know how others are dealing with this. I am making a public health system, and to deal with this I simply show results to the clinician or social worker etc. and have them manually approve the result before it's applied to an end user's protocol.


Your description sounds like you're training your reviewers to rubberstamp the system's results as correct.

That is a horrible outcome.

By the time something comes along they really need to question, they'll already have adapted to the system being pretty good and they won't even notice there was an issue.

This problem is endemic to ML systems and cannot be patched away, so far as I can tell. We need fundamentally different approaches to avoid this pit.

Please make sure decision-makers understand this unavoidable gotcha their ML systems must necessarily have.


Not rubber stamp, they can approve or reject the suggestion.


I didn't mean you were forcing them to approve it.

I meant that, over time, your system will become right more and more often, not by virtue of genuinely understanding the situations, but because it has more training data and comes up with correlations that happen to be correct.

Once it's mostly-kinda-reliable, your users will, over time, stop paying attention to it and just start trusting it, even if they don't mean to, because that's how humans work.

And then it becomes a rubberstamp, and outcomes start getting generated where no human really thought it through.


The FTC's language may be a bit rough around the edges, but marketers' language is quite often deceptive (like when one says "probably the best in the world": technically defensible but very misleading).


  Are you aware of the risks? You need to know about the reasonably
  foreseeable risks and impact of your AI product before putting it
  on the market. If something goes wrong – maybe it fails or yields
  biased results – you can’t just blame a third-party developer of
  the technology. And you can’t say you’re not responsible because
  that technology is a “black box” you can’t understand or didn’t
  know how to test.
I'm reminded of every single AI product that has been released... Good job, FTC.


Good job, FTC

Hopefully it goes after some people.

I once worked for a founder who was running afoul of some FTC regulations. When the team brought it to his attention, he claimed to have never heard of the FTC. When we explained what the FTC is, he started arguing with us, saying it had no authority over him and that he'd make the decisions for his business.

Money really does make some people crazy.


Does the money make them crazy or does the system reward the crazy?

Markets can produce some unexpected winners...


And then the FTC came along and fined them... <pinky to mouth> one... thousand dollars!


The FTC requires political headwinds measured in decades to do appreciably good work


Are you sure that wasn't Grimes' ex and the SEC?


Ignorance of your "black box" excuses not


> Ignorance of your "black box" excuses not

Hopefully the FTC understands linear separability. Because I often get the impression that people who want ML models to be explicable don’t, and are expecting the mathematically impossible.
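Since someone asks downthread what linear separability means here, a toy illustration with made-up points and no connection to any real model: a single line can implement AND over two binary inputs, but no line can implement XOR, which is the classic sense in which a purely linear model is limited.

  import itertools
  import numpy as np

  # Four corner points of the unit square. AND is linearly separable (some line
  # w1*x1 + w2*x2 + b = 0 puts the 1s on one side and the 0s on the other);
  # XOR provably is not, no matter which line you pick.
  X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
  y_and = np.array([0, 0, 0, 1])
  y_xor = np.array([0, 1, 1, 0])

  def separable_on_grid(y, grid=np.linspace(-3, 3, 25)):
      # Brute-force a coarse grid of candidate lines.
      for w1, w2, b in itertools.product(grid, repeat=3):
          pred = (X @ np.array([w1, w2]) + b > 0).astype(int)
          if np.array_equal(pred, y):
              return True
      return False

  print("AND:", separable_on_grid(y_and))  # True, e.g. x1 + x2 - 1.5 > 0
  print("XOR:", separable_on_grid(y_xor))  # False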


I rather think that they (the regulators) are saying that you can’t expect to use mathematical impossibility as a legal argument to avoid liability if your AI does material harm.


Doesn't this kill AI as a product?

Or at least mean that if you're too poor for a doctor or lawyer you can't fall back to the unreliable AI?


Not if you "keep your claims in check" and have proper oversight of the output your AI generates. Then you will be ready at all times to step in, should something go south. Yes, maybe it does not scale up without limit with proper oversight in place, but this is exactly what Google and co are missing these days and the reason they extract cash to no end. Proper oversight is completely missing.


Aren't they really only saying you have to disclose it in your claims about the product?


Exactly my read - they're just saying that if you want to pretend you are selling magic that's not allowed.

Quantifying your risk does not mean you eliminated all your risk.

We allow the sale of alcohol, tobacco, and firearms too.


that is correct.


If the system you're building is "mathematically impossible" to explain in the way that would be expected of every other system, maybe you shouldn't be building that system.


Mathematical explanation is not required in many areas.

Perhaps you can’t perfectly explain it but you can at least understand it statistically.

For example, it is not currently possible to mathematically explain people's behaviour, but there is statistical evidence, and also accountability on an individual, e.g. a doctor making a decision about a scan result.


It's even simpler than that. You don't need any particular level of understanding, you just can't lie about what your level of understanding is.

The following would seem to be OK: "This model performs well in a set of tests we devised, your mileage may vary."


I'll raise another example to question that: you can statistically solve the famous diffusion problem.


Non-linear models often perform better. You’re never going to approach anything like GPT with a linear model.


I don't think mentioning non-linearity means you can bypass the singularities you may cause (typically cusp, buckling, etc.) and omit the analysis, intentionally or negligently.


Theoretically you could use a polynomial model which, while not linear, is pretty high on the explainability scale.
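Something like this, I mean: with made-up numbers, the entire fitted "model" is three coefficients you can read off and reason about.

  import numpy as np

  # Made-up measurements; the point is only that the model is fully legible.
  x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
  y = np.array([1.1, 2.9, 9.2, 19.1, 33.0])

  a, b, c = np.polyfit(x, y, deg=2)  # fit y ~ a*x^2 + b*x + c
  print(f"y ~ {a:.2f}*x^2 + {b:.2f}*x + {c:.2f}")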


From the FTC's perspective, stuff like that is just a technical detail. It is on the business to make sure that the final result can be explained.

If you can't explain the ML part due to whatever math detail, you better make sure you write a good wrapper around it to catch bad output which you can explain.


Lots of businesses deal with high complexity and open ended problems that lead to "inexplicable" risks. They address this by showing diligence in monitoring and mitigating risks, evolving their approach as new concerns are understood. AI companies should employ similar motions (and some do).


Actually, if you ask GPT-3 to explain step by step how it arrived at the conclusion, it works pretty well and even has higher accuracy.


GPT is making it up.

It’s cobbling together an explanation from texts that explain how to solve a problem. It’s like a student who copies an answer, then copies the explanation as well.


Chain-of-thought is not the same as direct prompting. Step by step is better.

> Experiments on three large language models show that chain of thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. The empirical gains can be striking.

https://arxiv.org/abs/2201.11903
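Roughly the difference, sketched with plain strings. This shows the zero-shot "let's think step by step" flavour rather than the worked few-shot exemplars the linked paper uses, and complete() here is just a placeholder for whatever completion API you actually call:

  # Hypothetical helper -- stands in for a real LLM completion call.
  def complete(prompt: str) -> str:
      raise NotImplementedError("wire this up to your model of choice")

  question = ("A juggler has 16 balls. Half are golf balls, and half of the "
              "golf balls are blue. How many blue golf balls are there?")

  # Direct prompting: ask for the answer straight away.
  direct_prompt = f"Q: {question}\nA: The answer is"

  # Chain-of-thought prompting: invite the model to write out intermediate
  # steps before committing to a final answer.
  cot_prompt = f"Q: {question}\nA: Let's think step by step."

Same question, same model; only the prompt changes how much visible reasoning you get before the final answer.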


But there’s no reason to believe that is actually how the result was generated.

The result was generated via a deep neural network, not by whatever explanation or reasoning it prints out when asked.


As someone who doesn't understand linear separability (at least in the way you're using it), can you explain?


Why bother quoting when it's the entire comment? Never understood that


I'd be elated if the FTC would go after advertisers and the ad industry in general. There are almost no ads without exaggerated claims, or without innuendo and insinuation about what the product does that it, of course, won't deliver on.



Just knowing that they have this stance makes them seem even more toothless than before… At least before this I assumed AI claims were outside of their regulatory domain.


This is an interesting writing style for a .gov webpage...

It gives me hope that someone in the government actually cares about doing their job for the benefit of the public.


"the FTC won't let me be, or let me be me"


I find the final remark fascinating: You don’t need a machine to predict what the FTC might do when those claims are unsupported.


You don’t need a machine to predict what the FTC might do when those claims are unsupported.


This was a pure gangster statement from the FTC, lmao


This reeks of the government being stupid.

The same thing when they brought the GameStop guy Keith Gill to speak before congress.

The FTC is now the AI marketing police? Come on, give me a break. America, you can’t sue your way to success… your population needs to just get a little smarter on their own. Or maybe you could fund education.


Better yet: keep other people’s claims in check and then you can checkmate them


Josh Browder should pay attention to this.


    This message is not new. Advertisers should take another look at our earlier AI guidance, which focused on fairness and equity but also said, clearly, not to overpromise what your algorithm or AI-based tool can deliver. Whatever it can or can’t do, AI is important, and so are the claims you make about it. You don’t need a machine to predict what the FTC might do when those claims are unsupported.
Really great and ominous closing paragraph.


> You don’t need a machine to predict what the FTC might do when those claims are unsupported.

Indeed you don't, but one would tell you: the error-minimizing prediction is that the FTC will do absolutely nothing, since that's what they usually do.


I'll believe the words when I see them take action. As the text itself points out, the message is not new, and that just makes it look worse for them.


Not ominous, threatening, and somewhat juvenile IMO.


What are ramifications of false advertising of AI capabilities in medicine? Policing? Mass surveillance? Education? Traffic control? Mass transit? Farming? Shipping? Law? News?

Death. Mass death. Poverty. Disease. Mass destruction. Famine.

Brush away these warnings to move fast and break things if you (Royal you) want. Hopefully, you won’t cause mass suffering, but if you do I hope you’re made an example of.

When the ramifications are great so must be the controls.


I don't disagree; I was expressing that I find their tone surprisingly “yee haw, we'll getcha'” for an org that wields such power and responsibility.


It's like something a small town sheriff would say in a press conference. The difference is, he'd probably be talking about violent crimes and speaking from the heart. This is a three-letter-agency of unelected law makers threatening who-knows-what over violations of ambiguous statutes they created.


Considering who said it and their track record, I don't need a machine to predict that I might (or might not) be fined 1% of the swimming pool full of solid gold coins that I'll be swimming in thanks to my false claims, to deter others from following the same path.


Nothing juvenile about it.


https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headline...

Are you exaggerating what your AI product can do? no

Are you promising that your AI product does something better than a non-AI product? no

Are you aware of the risks? no

Does the product actually use AI at all? no


This is great, but it needs examples.

I'll give you one: ChatGPT.

Here's what the main page has to say:

> We’ve trained a model called ChatGPT which interacts in a conversational way. The dialogue format makes it possible for ChatGPT to answer followup questions, admit its mistakes, challenge incorrect premises, and reject inappropriate requests.

This is some careful writing. Most important are the weasel words, "possible for", and "in a...way". These make the context of the above paragraph ambiguous enough that it doesn't provide confidence in its own claims.

But is that enough? In court, probably. To the market? It doesn't seem to be. It's clear that most of the conversation about ChatGPT is (ironically) a continuation of the same foundational misconceptions that are expressed (albeit without confidence) here.

What is that misconception exactly? Identity. Here's one last claim that gets straight to the point:

> While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior.

I want to focus on some very enlightening weasel words: "exhibit behavior".

The more you read about what ChatGPT does, and more importantly, how it works, the more you are likely to see those magic words. That's because ChatGPT, from its most exciting results all the way down to its core behavior, doesn't actually behave.

But isn't that its core feature? Aren't carefully trained behaviors the main thing we expect to see in its output? Yes! And we see them! How exciting.

But there's a really important distinction that's really easy to miss: these behaviors are not ChatGPT's behaviors. They come from somewhere else entirely, and ChatGPT delivers them to us, usually at the right place and the right time. This is a case of mistaken identity.

So what, then, does ChatGPT do exactly? It finds semantic continuations. Where? Inside the neural network that models a suite of example semantics.

It's only a model!

So whose behavior is being exhibited? Language.

Every exciting feature ChatGPT claims to provide is a feature of language itself.

Interacts in a conversational way? Language. Answer follow-up questions? Language. Admit its mistakes? Language. Challenge incorrect premises, and reject inappropriate requests? Language.

It's not just the exciting ones, either! It's the frustrating ones, too!

Writes plausible-sounding but incorrect or nonsensical answers? The model is often excessively verbose and overuses certain phrases?

All valid and common features of language.

And one more limitation that really illustrates the misunderstanding about what ChatGPT does:

> given one phrasing of a question, the model can claim to not know the answer, but given a slight rephrase, can answer correctly.

That's 100% semantically valid language. Language does nothing to stop us from telling lies! When ChatGPT pulls a continuation out of its model, there may be multiple options that are both valid, and contradictory! It might pull the reactionary response, "I don't know" (a very common continuation in real speech to any sentence structured as a question) or it might pull a domain-specific answer that responds to the subjects in that sentence.

It would be really useful if we could just teach it to do the second one, right? But we can't, because ChatGPT never draws a distinction between grammar and subjects. In fact, ChatGPT doesn't even distinguish words or punctuation!

ChatGPT can never understand. There's no place in its behavior to introduce logic. The best it can do is practice.

Instead of teaching ChatGPT, its authors give it carefully filtered data to model, and then put their fingers on the scale to make continuations they prefer more likely to be chosen. In a word: training. That's the best that can be done. Anything more requires a different kind of tool entirely.
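To make "finds continuations inside a model of example text" concrete, here's a deliberately tiny toy: a bigram table, nothing remotely like a transformer, but it shows the same division of labour. Every word the sampler can ever emit was put there by the corpus, not by the sampler.

  import random
  from collections import defaultdict

  # A toy corpus: all of the "behavior" the sampler can exhibit lives here.
  corpus = "i do not know . i do not agree . i think the answer is forty two .".split()

  model = defaultdict(list)
  for prev, nxt in zip(corpus, corpus[1:]):
      model[prev].append(nxt)  # remember which words followed which

  def continue_from(word, n=8):
      out = [word]
      for _ in range(n):
          options = model.get(out[-1])
          if not options:
              break
          out.append(random.choice(options))  # pick one of the seen continuations
      return " ".join(out)

  print(continue_from("i"))  # e.g. "i do not agree . i think the answer"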


> This is some careful writing. Most important are the weasel words,

So what do you want, a full refund? OpenAI is probably pretty busy, so I'll handle that for them and give you your nothing back. Glad to be of service. :P

> It would be really useful if we could just teach it to do the second one, right? But we can't, because ChatGPT never draws a distinction between grammar and subjects.

The structure of the world, as expressed in its training data, is part of its 'grammar'. It absolutely can and does favor giving truthful answers even though a lie is equally grammatical English.

One only need spend a minute looking at HN though to see that "I don't know" is really not part of the language used in most places online, so far they haven't come up with a way of training the model on all the places where someone saw something and decided to say nothing. :)

> In a word: training.

We all also learn about the world through training. The LLMs aren't at all like human minds -- they're fixed-depth circuits and can only engage in longer thoughts by 'thinking out loud', for example. But to say it can't distinguish truth from lies because both are valid language sounds like an error resulting from hearing the title "language model". LLMs model the world through language.


I want people to understand what the thing is, and what it isn't. By people, I especially mean the people involved in making and selling it.

The overwhelming majority of talk I see on the subject misunderstands this very important distinction. Because it is missed, people are talking about ChatGPT personified. They are talking about the version that lives in their hopes and dreams, as if that version is just around the corner. It isn't. It just appears to be. Our hopes and dreams are the content on exhibition. We see them echoed in semantic reconstruction, but what we are looking at is not the work of a new writer: it's the old words of many, successfully repurposed in new semantic space.

It's as if we took all the stories we love, and split them into puzzle pieces, then fit the pieces together in a new order. Because language is so flexible, many of them fit, and even look good together. Language is so good that it pre-sorts subjects, objects, phrases, responses, and even logic. The puzzle-maker doesn't even have to know what they are working with to cut the pieces in a useful way!

> It absolutely can and does favor giving truthful answers even though a lie is equally grammatical English.

That depends on the context. In cases that it has the answer, that is mostly true. This covers a lot of cases, especially those that are likely to be tested; because many such cases have "failed" and been resolved. That's literally what testing looks like.

But it's still enough of an open problem that they write about it on the main page.

The very fact that OpenAI has managed to put the sheer quantity and quality of behavior into their model that they have, is indeed impressive. But it is those people who deserve the credit, no matter how much they want to attribute that credit to ChatGPT itself.

My point here is that every "desirable answer" - that ChatGPT provides as a continuation - comes from the already-existing language in its training data. It may be semantically constructed as a result of being pulled from a semantic model, but we need to stop pretending it is symbolically constructed. That work happens entirely in the construction and application of the training data itself. ChatGPT is never doing that part of the job.

> One only need spend a minute looking at HN though to see that "I don't know" is really not part of the language used in most places online

Exactly! Every time any of the "places online" (that end up used as training data) exhibit this behavior, that is an instance of the work being done. The finished product of that very work is freely available. It only needs to be found in one of several semantically correct and semantically related contexts. That's what ChatGPT does, and nothing more.

By virtue of the average person's interest in detailed answers to questions, and the average person's dislike for conversations that end in non-answers, we are already doing the work. Just by participating in the most common internet dialogues, we are creating a wealth of data that captures the same behavior we hope to see in ChatGPT's continuations.

> they're fixed depth circuits and can only engage in longer thoughts by 'thinking out loud', as an example.

That's a great summary of my point. Why are you arguing if you agree with me?

The phrase, "thinking out loud" is an excellent illustration for the power of language itself to capture and encapsulate logical behavior.

We could be spending a lot of time constructing semantically valid nonsense like, "colorless green ideas sleep furiously". But any time you see that sentence, you are likely to see it surrounded in thoughtful context. This sentence is a popular example, not only of the ability for language to contain nonsense, but also for our tendency as language users to avoid doing just that.

A story isn't just a jumble of semantically valid symbols: there is meaning expressed. We don't write a lot of nonsense. We do write a lot of meaning.

That meaning is neatly packaged into the semantics of language. To find it, one only needs to unwrap the semantics themselves.

But that only works when the meaning has been written somewhere. And there is no way to distinguish between two conflicting meanings that live in the same semantic space. Even if neither one is a lie, the semantics must disambiguate, or one of the options has to be preferred as a continuation to its expected context; during the training step.

> LLMs model the world through language.

They don't model the world: people do. They just model the language. Because we have packaged our models of the world (and every interaction with those models) all together into language, we can see language models echo all those behaviors back to us.


[flagged]


You (intentionally?) left out the beginning of that sentence, "In an investigation". Yes, if the FTC decides to investigate you, they can subpoena your code. No, they are not checking everyone's code.


They can't do it willy nilly, if I'm understanding it correctly. Only as part of an ongoing investigation, along with all the legal clauses and restrictions of running an investigation. And this sort of thing is exactly what I'd expect from an FTC investigation.

I do love the use of "look under the hood", which I imagine is a nod towards Tesla and FSD.


If you sell me something that says it does X on the box, but it doesn't actually do X, then that's all the purview needed. If they have to prove it by getting into the source code, then all the better (assuming this would be some 3rd-party auditing company doing it for them). Why? There's a difference between being so bug-ridden that your software doesn't work and not even attempting to do what the box claims.


Yea, the message seems to boil down to "think before you falsely advertise" which is squarely within the FTC's scope of authority.


I think they're referring to the discovery process in the lawsuit that you'll be dealing with.


If claims are made, those claims must be verifiable... Same as eg. for medication, snake oil much?


"Source code" is awfully close to a "recipe", and I'd very much like for actual ingredients to be verified by a trustworthy source if needed.


Do "IF-THEN" statements count as AI? Do rules-based expert systems count?


I imagine we'll be seeing a few cases setting that precedent in the next 10 years, and it's likely that the kind of "aha I said finger, a thumb isn't a finger, so I win" argument favoured by sovereign citizens, 5-year-olds and techbros who think they're smarter than everyone else will go over as well as it usually does.


At the height of the Smart Contract hype I used to bring arguments like these as an example why automating justice is not something you want to do.

Generation after generation of fresh "sovereign citizens, 5-year-olds and techbros" is surprised that judges are in fact allowed to use life experience and common sense before making a decision.

"Not guilty because of a technicality" is usually reserved for TV.


The entire post was about keeping AI claims in check. Whether you market your IF-THEN statements or rules-based expert systems as AIs is what matters.


But if those things are considered AI, then why would it matter if you market them as AI?


Why do you consider them to be AI? Random IF-THEN statements are not AI, or any program fragment would be AI. False advertising is under the FTC's purview. [1]

  When the Federal Trade Commission finds a case of fraud
  perpetrated on consumers, the agency files actions in
  federal district court for immediate and permanent orders
  to stop scams; prevent fraudsters from perpetrating scams
  in the future; freeze their assets; and get compensation
  for victims. When consumers see or hear an advertisement,
  whether it’s on the Internet, radio or television, or
  anywhere else, federal law says that ad must be truthful,
  not misleading, and, when appropriate, backed by scientific
  evidence.
[1] https://www.ftc.gov/news-events/topics/truth-advertising


Not sure why you're quoting the false advertisement text; it doesn't pertain to the argument of what is considered to be AI.

At least rules-based expert systems have always been a part of artificial intelligence. They shouldn't be called machine learning, though.

If/then logic is really just an implementation detail of rules-based expert systems. I think these rules-based systems can be differentiated from normal code in that a lot of compact domain knowledge is encoded for classifications or decisions.
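A minimal sketch of what I mean, with invented thresholds purely for illustration: the "expertise" is the hand-written, ordered rules themselves, which is also why such systems are comparatively easy to audit and explain.

  # Toy rules-based "expert system": domain knowledge encoded as ordered rules.
  # (Thresholds and conclusions are made up -- not real medical guidance.)
  RULES = [
      (lambda p: p["temp_c"] >= 39.0 and p["cough"], "suspect flu, refer"),
      (lambda p: p["temp_c"] >= 38.0,                "fever, monitor"),
      (lambda p: True,                               "no action"),  # default rule
  ]

  def classify(patient):
      for condition, conclusion in RULES:
          if condition(patient):
              return conclusion  # first matching rule wins

  print(classify({"temp_c": 39.5, "cough": True}))   # suspect flu, refer
  print(classify({"temp_c": 36.8, "cough": False}))  # no action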


So what's the FTC's definition of AI then?


The same as the advertiser's definition.


It’s been acceptable forever. This is how a lawsuit works.


I hate the idea of government. It always gets too big.

This is awesome.


Oh man... And this is a government agency. So proactive and articulate. No wonder the US is where it is, with its wonderful state apparatus. It certainly has its fair share of failures, but this is at a whole other level.


I'm not sure I follow, I thought the article was quite articulate. Should a government agency not be proactive and articulate?


That's exactly what I am saying: that it is very articulate and pro-consumer. And that the US is where it is because of its state apparatus. There are surely some failures here and there, but it functions nevertheless.

How do you read it differently?


I think the "Oh man..." and "No wonder" bits read sarcastically at first glance.


Yeah, my apologies, I read the post as dripping with sarcasm. My mistake!



