Sam Altman Says OpenAI Will Leave the EU If There's Any Real AI Regulation (gizmodo.com)
87 points by rntn 4 days ago | 200 comments

> If we can comply, we will, and if we can’t, we’ll cease operating… We will try. But there are technical limits to what’s possible.

Isn't that what any normal company does? Comply until it's impossible to operate legally? They can choose between ceasing to operate or getting arrested.

If the AI regulations are reasonable and AI companies can't operate, the technology just isn't ready for widespread adoption yet. See also: Tesla's Autopilot.

I have to wonder what the investors over at Microsoft are thinking right now. I doubt they'll be happy with threats of leaving lucrative markets from a company so deeply integrated into their product suite.

> If the AI regulations are reasonable and AI companies can't operate, the technology just isn't ready for widespread adoption yet.

Conversely, if the AI system is reasonable but the regulations make it illegal, then the blame lies with the regulations.

There is a trade-off between potential benefits and potential harm, and it's not immediately clear to me who is in the better position to find it: the AI companies or the legislators. The former are biased because they want to sell their products, but the latter have a worse understanding of the technology and are, in my opinion, on average too overprotective.

> Conversely, if the AI system is reasonable but the regulations make it illegal, then the blame lies with the regulations.

have you read the EU's proposed approach to regulating AI?


it seems to be an extremely reasonable approach to me, if anything a little too lax

the problem for OpenAI is its products can't easily be segregated into the different risk buckets...

and they want ChatGPT wired into the high risk areas as that's where the profit is

I think the proposal is reasonable provided ChatGPT is classified as moderate risk.

One thing I don't quite understand is how they are going to ensure that AI-generated text is always marked as such. When the text is generated in the chatbot, the fact that it is AI-generated is clear, but the user is free to copy-paste it anywhere. There is no good way to watermark plaintext.
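A toy sketch of why this is hard (purely hypothetical, not any real scheme OpenAI or the EU has proposed): any invisible marker embedded in plain text, such as zero-width characters, survives copy-paste but can be stripped with a one-liner, and is lost entirely the moment someone retypes or paraphrases the text.

```python
# Hypothetical plaintext "watermark" using zero-width characters,
# and a demonstration of how trivially it is removed.
ZWSP = "\u200b"  # zero-width space: invisible in most renderers

def watermark(text: str) -> str:
    # Insert a zero-width space after every inter-word space.
    return text.replace(" ", " " + ZWSP)

def is_watermarked(text: str) -> bool:
    # Detection is just checking for the invisible marker.
    return ZWSP in text

def strip_watermark(text: str) -> str:
    # Defeating the scheme is a one-line replace.
    return text.replace(ZWSP, "")

marked = watermark("AI generated text")
assert is_watermarked(marked)
assert strip_watermark(marked) == "AI generated text"
```

Statistical watermarks (biasing the model's token choices) avoid the invisible-character problem but are similarly weakened by paraphrasing, which is why enforcement would likely have to target behavior rather than the text itself.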

Individuals will always be able to do this, but it might be enough if it's illegal and a bit of extra effort. You don't need to eliminate 100% of AI-generated spam, just make it risky and unprofitable enough that it's not a common practice, and have people call out those who seem to be doing it. I'm not sure it's possible, but it seems worth trying.

And if someone is operating a troll farm in another country, this won't help.

The regulation is meant to protect EU citizens, who couldn't care less about the financial incentives of a US company or their hype du jour. If hypothetically no AI can be created while respecting the citizens, what should the choice be: block it, or accept Skynet? (I said "hypothetically")

I am curious, do you think SkyNet, if it's really "SkyNet", can be blocked within the confines of the country in which it appears first?

And if it's not SkyNet but just an awesome tool like ChatGPT is today, do you think regulators will manage to weigh the advantages lost by making this tool illegal against the protection from a potential SkyNet?

Suppose a new generation of AIs results in a big economic boost. Then a country might be put in a position to choose between some risks to privacy, etc., and the risk of being left behind.

A similar situation would be a country that in 1900 decides whether or not to outlaw cars.

Nobody outlawed cars.

In the UK, though, we did for a while mandate that a pedestrian walk in front of the car with a red flag to warn of the approaching vehicle!


There was precious little regulation in the 1900s, though. Don't you think that if cars were invented today they would face a whole lot of regulations?

Cars do face a ton of regulations today. You can’t sell a car if it doesn’t meet or exceed hundreds of regulations.

1900s also had child labor and black lung. Not a standard to aspire to.

> If hypothetically no AI can be created while respecting the citizens

The problem is you're assuming the people writing the regulations are perfect. It's not that bad AI should be allowed to exist; it's that we're probably at least a decade away from regulators even having a hope of starting to understand what's going on.

Why should I assume anything? We are what we are, and we do what we can with what we have, imperfect as it is. Inactivity is way worse. By the way, understanding all things (AI or whatever) is definitely not a requirement for the legislators. I'm sure you remember they have hordes of specialists on their staff, and those will understand the aspects at hand well enough to sketch recommendations.

It's not legislators, although I disagree with your assessment there. The people actually doing assessments and inspections of companies do not have much expertise at all. Even for far simpler technical tasks than assessing AI.

As an EU citizen I think it's about joining SkyNet or falling behind in productivity. I prefer joining SkyNet.

If, hypothetically, a SkyNet-like AI actually existed, how much damage to humanity would you be OK with as long as productivity and quality of life went up for those who were spared? Let's assume that you and your loved ones are among those who have been guaranteed safety by this AI.

I am sorry if the tone comes across as snarky. Not my intention. Genuinely curious.

My comment was that if it starts to exist, EU regulation won't stop its power and influence.

If everyone uses ChatSkyNet for programming (or even ChatGPT-4) but the EU is not allowed to, the EU will simply lose its competitiveness in creating products.

Sadly, the EU has already lost that competitiveness through many other similar regulatory situations. The real risk is not competitiveness in creating products, that's a lost battle already :( The real risk is losing competitiveness in even being able to work for the American companies that are creating products.

> My comment was that if it starts to exist, EU regulation won't stop its power and influence.

it... might?

it possibly existing in the future is not a good reason to lie down and just surrender control of world governance to ChatGPT today

So it's subjective right? The balance.

Since a company is usually harmed by increased regulation their opinion is biased.

But there's no opposite to this. Governments don't benefit if companies leave; in fact, they lose out. This also includes the population (jobs, etc.), which means the opinion of the government isn't biased.

> Since a company is usually harmed by increased regulation their opinion is biased.

I know this is the prevailing point of view in business, but is it actually true? How many bank failures would have been avoided if US banking regulations hadn't been slackened so much?

> How many bank failures would have been avoided if US banking regulations hadn't been slackened so much?

Most of the failures after 2000 would have been prevented. All of the 2008 financial crisis was caused by removing regulation.

Prior to Commodity Futures Modernization Act of 2000 [0], several states ruled that credit default swaps were insurance and thus regulated CDS as insurance. Some states ruled that CDS were gambling and regulated it as gambling. The CFMA basically said "credit default swaps are now a federal issue and it will be regulated at the federal level" and set up a poorly funded agency that had no teeth until after the bailouts in 2008.

During the 1920s, many banks got involved in both commercial banking (taking deposits and making loans) and investment banking (arranging IPOs). In 1933, the Glass-Steagall Act [1] said that banks may do one or the other, but cannot do both. In 1999, Glass-Steagall was repealed by the GLBA [2] after Citicorp had begun doing both (due to mergers) while daring the regulators to do anything about it.

0 - https://en.wikipedia.org/wiki/Commodity_Futures_Modernizatio...

1 - https://en.wikipedia.org/wiki/1933_Banking_Act

2 - https://en.wikipedia.org/wiki/Gramm%E2%80%93Leach%E2%80%93Bl...

When the government covers your losses and even lets you take bonuses for fucking up and causing a recession, why wouldn't you think regulation slows you down?

How many guns would get sold if the government hadn't created laws that protect the industry from liability?

Or Section 230, which shields websites from the rent-seeking of the cable companies but allows them to rent-seek advertisers.

Common sense is what protects gun manufacturers from liability. If you run someone over with your car, Ford is not liable either.

Regulatory moats can be pretty powerful - not sure regulations always harm all companies.

By proclaiming they'd leave if they can't comply, it might cause the EU regulators to "self-censor" the regulation such that it's "easy to comply".

But tbh, I don't see what the regulations would be. I am a fan of no regulation unless actual harm has been shown first.

> I am a fan of no regulation unless actual harm has been shown first.

I used to be, but some of the modern crop of CEOs has taken the wind out of that. They have done tremendous damage, very quickly, and the regulation has been unable to keep up, until severe damage has already been done.

Once the bullet has left the gun, it can't be stuffed back in.

Can you be more specific, with examples? Who did what?

Asbestos. Phthalates. Fen-phen. Chlorofluorocarbons. Leaded gasoline. All were legal because no harm had been shown... yet.

"Modern crop of CEOs"

All of those examples are primarily damage done pre-1980s.

Phthalates? Aren't those currently an issue, right now, in water? Aren't they currently being used in plastic? Or did I get the name wrong?

The name is right but the claims of clear badness are suspect.


Was the potential harm well known and accepted before showing it existed? How?!

You need an example of a bad thing happening because regulations didn't exist?

I can be, but, really, why should I?

It would take about twenty seconds with Google, to come up with hundreds of examples. Some, better than others.

They may seem obvious to you, but to a lot of us, it's not obvious at all. Vague notions of "damage" could refer to anything, like harming competition. Different value systems and all that.

Well, the original post was "Stop me if you can."

Basically, if there's no rule saying I can't do it, then I should do it, until rules are developed to stop it; regardless of the morality or danger.

I suspect that one way to make people more careful is to make future rules and regulations retroactive.

For example: suppose I had developed a fintech product that was a legal Ponzi scheme, or a downloader that was a legal way of accessing copyrighted material, because of loopholes, and those loopholes are later closed (happens all the time). Then, if the new remedies can be applied to the "legal" implementations that were in effect before the closing of the loopholes, I suspect that it would make the "loopholer" a wee bit more cautious.

But that won't happen. There's way too many ways that it could go off the rails.

So people will keep on bending, folding, spindling, and mutilating, until they are stopped by a steel door, slamming down in front of them.

I strongly suspect that a lot of companies are set up to harvest as much as possible, before the door slams, so they can reap the benefits.

> if the new remedies can be applied to the "legal" implementations that were in effect before the closing of the loopholes, I suspect that it would make the "loopholer" a wee bit more cautious.

And there are moral implications to applying new rules retroactively. What you do today that is completely legal should not be punished tomorrow, even if it is decided in the future that it is illegal. Otherwise, there would be a chilling effect on innovation.

Exactly. I could see a government, in need of tax revenue, enacting a law that taxes some activity, then going after people that have been doing it for years.

Suddenly, we owe ten years of back taxes, with penalties.

It’s not feasible to do this.

But there are plenty of people (many on this very forum, I suspect) who know what they are doing is highly immoral, and will likely get shut down, but … MONEY … so they do it for as long as they can get away with it.

User privacy, Cambridge Analytica, Facebook, and the impact those together had on several elections, most especially Trump 2016 and Brexit.

Cambridge Analytica had basically no effect on any election. People who actually understand online advertising consider it a joke.

The "organic" social stuff that helped Trump 2016 (leaked Democrat emails, etc) had nothing to do with privacy or ads.

Cambridge Analytica literally sold its services as an "election management agency" and promised to be able to influence a large number of undecided voters.

More than 87M people were profiled, and more than 3M people were targeted by Cambridge Analytica in 11 key states during the election to feed them the content that would swing their indecision towards Trump based on their psychological profile and preferences.

Trump paid for that, and Michigan and Pennsylvania and New Hampshire were all very close.

To boldly state that CA had no impact on US elections, Brexit, and a South American election I don't remember, when the people involved literally sold it as such and used previous results as evidence, seems a bit of a stretch.

I might be wrong, but there's no clear way of knowing unless votes were not anonymous.

Imagine that someone says Trump hacked the voting computers by writing code on a piece of paper and slotting it into the machines.

The CA thing is the same level of stupid. The targeting would just be senselessly bad based on CA's methodology.

What was the actual, documented, damage on the "user privacy" issue though? The only effect I know is targeted ads which seems like a net benefit to me.

Uber and Airbnb?

Uber brought ride sharing and cracked the taxi market wide open - perhaps to the benefit of riders.

Airbnb is also a boon for travellers looking to stay cheap, and a boon for property owners to cash in on underutilized real estate.

The "harm" cited as caused by them falls only on the incumbents that do not move with the market or the times.

You don't live in a European city near the sea. Airbnb is a plague. We used to do house sharing in summer; we did it for years, until Airbnb killed it. It was a 1-to-1 exchange in terms of population; we now do 5x to 10x in summer.

Now that the harm is done (speculation on housing priced out teachers and low income workers, old, niche stores turned into tourist traps), we finally have regulation. A bit too late. But at least the mayor is adding 20k housing units (+20%), driving the price down and making speculators eat a fat loss.

Uber was regulated much faster, so its impact has mostly been positive, unless you were raped by a predator back when they didn't check their drivers.

You have to regulate ahead of time, even if it's soft and toothless in the beginning. Check the EU proposal to regulate AI, it seems really, really soft.

It's like a Necker cube.

Glance at it and she looks like a renter or typical youth of a city with tourism but poor purchasing power and social mobility; stare a bit longer, relax your focus, and she becomes an incumbent who doesn't move with the market or times.

> the regulation has been unable to keep up

it's because the gov't is not incentivized properly. I think they should be pressured to keep up. This isn't an argument to "pre-regulate", because incompetence at the gov't can be fixed!

> Once the bullet has left the gun, it can't be stuffed back in.

But once many people start shooting guns willy-nilly, you start regulating guns. But not before.

    … no regulation unless actual harm has been shown first.

It is the duty of politics to protect the citizens and the environment, also proactively: from reckless CEOs, greedy shareholders, and lawyers who claim that causing harm wasn't strictly forbidden.

Politics doesn't always get it right. The Cookie Directive tried to make misuse of cookies illegal. In practice it made web browsing horrible and created further questionable businesses like "cookie cloud storage providers aggregating cookies of several websites".

On the other hand, these people require ABS and ESP in cars, banned Tesla's "Autopilot", and had smart ideas like type certification for aircraft. And then they forbid stuff which can cause cancer…that was smart! What comes next? Acting against monopolies and oligopolies like Microsoft and Apple?

(Harm has already been shown. Its more nuanced than this.)

Also proactively, indeed, but usually rather reactively. Why bother with, say, cops on the street checking for weapons when civilians tend not to carry them there? "Proactive" is rather seemingly proactive, the way AI is seemingly intelligent. In reality, it's reactive, but the quality of the reactivity matters (speed, thoroughness, accuracy, transparency, cost, etc.). Some of these qualitative measures are at odds with each other.

> And then they forbid stuff which can cause cancer…that was smart!

Tobacco is regulated; not forbidden.

Regulation without known harm is a case of "we need to do something, and this is something". The rules will be arbitrary and expensive to comply with, and will most likely have no preventative effect on actual harms.

Vaccines are the medical analogy to regulation.

You have to first prove that it is not going to do more harm than good. And without knowing what sort of harm AIs do (it's all theoretical at the moment), any regulation is merely speculating about a harm, and the cost of prevention might be high (or stifling).

ABS and ESP in cars came _after_ harm was shown. So did emission controls, crash-related safety measures, and testing/certification.

> And then they forbid stuff which can cause cancer…that was smart!

Which makes sense, as that stuff has been shown to cause cancer.

In other words, harm has to be shown, rather than speculated, before regulation is desirable.

We know certain possible high risk applications already (e.g., dosage recommendation for medication), so it is not all about figuring if something is risky or can cause harm.

The question then is more: regulate those within the particular area of use or have something that generally aims to regulate risky use.

How are you going to *prove* in advance that a certain economic or social measure is doing what you expect?

This is a normal negotiation tactic on the part of OpenAI. Businesses often say they will leave a jurisdiction or be forced to shut down if X regulation is put in place.

It's a tactic of all businesses. In the US they play states against each other to get higher and higher tax breaks for moving.

It's not a tactic. It's just reality. If all the expense of coping with regulation is too much, then it's not worth doing. It seems almost too facile for Sam Altman to have said, but perhaps it's worth reminding people that regulatory compliance is an expensive business in and of itself, even if your product already fully complies.

The name for it is Boulwarism [0]. It is a "take it or leave it" attitude. You see it every day when presented with an "I agree" checkbox/button.

0 - https://en.wikipedia.org/wiki/Boulwarism

I don't see the difference from normal pricing, it's take it or leave it in nearly all retail contexts nowadays.

Businesses are not animals, it's immoral

This is a nonsensical statement. All markets, and thus all goods sold on a market are regulated. You can’t have markets without regulations.

This is just thinking about the wrong level of regulation. Not all goods need to have CE marking, or get audited regularly, or anything like that. I can go to someone who makes furniture in their shop and buy that furniture without them ever contacting the British Standards Institute.

You can't buy it if you can't agree about what money is. That's one of those pesky regulations.

The regulations forbid competing currencies (sorta), but people create currencies for trade even when no government is doing so.

As I say, the wrong level of regulation.

You're not making a point on the argument being made.

It's not about _not_ having regulations. It's about not introducing new regulations without demonstrating harm first.

The current crop of regulations, I argue, is sufficient, and no new ones should be introduced for AI.

Why wait until harm occurs to regulate?

Why pay a cost to regulate, if it may not ever be needed?

Assuming there's reasonable probability that something could go wrong you don't wait. It's better to be proactive instead of reactive

As quoted in the article:

> He called for some regulation “between the traditional European approach and the traditional U.S. approach,” whatever that means.

And you phrase it:

> I am a fan of no regulation unless actual harm has been shown first.

This is the "traditional US approach". As has been shown far too often, by the time actual harm has been shown, it is far too late. Especially as large industries like to use their economic clout to pervert things with propaganda like "oh, the science isn't settled yet".

The "traditional US approach" is why cigarettes were sold for 100 years after they had been shown to cause cancer along with other major medical issues. And why cigarettes are still sold.

And another HN story from today discusses this exact issue: https://news.ycombinator.com/item?id=36067401

The European approach is "show us that it is safe before you put this stuff on the market". They use the 'Precautionary Principle'.

> The precautionary principle is the idea that when there is the chance of negative consequences from an industrial practice, that the burden of proof lies with the inventors/implementers of the process to prove that there are no negative (ecological and environmental) consequences of it. The principle is often cited by "technological conservatives" and/or environmental activists when there is a perceived lack of evidence showing that a technology is absolutely safe.



Is AI "safe enough" to be sold/marketed to people? Many people say it isn't. The people who tend to get into executive positions at companies have a visible lack of concern about anything other than their personal wealth. So while AI might be "safe enough", the people who tend to get to the top will make even the safest things unsafe.

And yet, the empirical result is that the US is now wealthier than the EU (at least in totality).

Not to mention that such regulations (on AI) might not be adhered to by countries such as China, which would make them all but moot, except for the harm caused to Western companies looking to profit.

A cautious approach is valid only if you have unlimited budget and you are not in a natural darwinian selection process of elimination.

The title of the article is undoubtedly a lie.

"any real regulation" does not match in any way what was said.

Yeah, sounds like a reasonable take (the title makes it a bit more... inflammatory?).

Of course all business should comply until they can't, at which point it's really not their problem anymore. I think it's perfectly fine to say "hey, we did our best", but if it still isn't enough, to simply not play the game.

This is an article by Gizmodo meant to generate clicks.

Unfortunately, HN has fallen for the tricks.

> If the AI regulations are reasonable and AI companies can't operate, the technology just isn't ready for widespread adoption yet. See also: Tesla's Autopilot.

It couldn’t be that the EU regulators are clueless could it?

And you don’t see a difference between the harm that ChatGPT could do compared to the harm of an AI controlling a self driving car?

Hmm, for a moment I thought there must be an error in the title and it should be "Sam Altman Says OpenAI Will Leave the EU If There ISN'T Any Real AI Regulation".

But no: he is in front of the US Congress arguing for regulation, and in the EU arguing against regulation.

It's pretty obvious he's after whatever (de)regulation supports his business model. But I guess that's just his job.

Let's hope regulators don't forget it is their job to represent the people's interests... I mean, whatever those may be in this case, I'm not claiming it's easy, I'm just claiming that Sam may not be the person to have the people's interests at the top of his list.

The real take away is that he’s out front shaping regulation before it impacts him.

In the US he knows it’s easy to manipulate our policy with a little song and dance because our policy makers don’t understand what they are looking at.

It’s a different story in the EU where you have competent leaders.

That seems to be the reason for the inverse approach.

US: “let me help you write the regulations ;)”

EU: “you’re gonna be behind if you regulate me :0”

As an EU citizen, I really wish we had competent leaders, particularly in Brussels. I have never seen any evidence that that's the case, especially with regard to tech.

The last EU initiative at sweeping regulation in the tech space (the GDPR) was disastrous. It imposed huge compliance costs on all entities, from large multinationals to small startups and businesses, and even some individuals and nonprofits. For the multinationals that is totally fine with me; they can afford it. But for the others it is far from clear that the benefits outweigh the costs. They could have just made it not apply to small entities (by revenue, number of users, or some other metric) and it would have been arguably great. The way they did it, they just gave a big advantage to large incumbents and put a strong handicap on EU-based tech startups (as well as other businesses, to a lesser extent).

Years after the rollout, I still occasionally come across international websites that have opted to ban European visitors rather than figure out how to comply with the GDPR.

Don't get me wrong: the GDPR was meant to address real, ongoing abuses of personal data, and it did some things right. But it could easily have delivered most of the benefits for a fraction of the cost if they had done things right. Unfortunately, like almost all regulators, Brussels tends to pay a lot more attention to the hypothetical benefits of regulation than to its predictable costs.

I agree, there are some stupid bureaucratic loopholes. For example: find a public data set without consents which you want to use. This is not allowed, but this is what you can do:

* Try to contact owner of data to ask for consent of all people (maybe thousands)

* They ignore you because it's an insane task to arrange all those consents...but...

* Now you have shown that you did a reasonable effort to obtain consent and...

* Now you can use the data for research purposes.

You can't use it for making/creating anything related to IP though. But you can get creative of course.

I wouldn’t be surprised if Sam Altman calling for regulation were done for two reasons.

1. Make it harder for smaller LLM based companies to break into the space

2. Drum up interest in OpenAI by making it seem that they had, or were close to having, an AI worth regulating

1000% this.

He's looking for regulation in the US that will ultimately protect Open AI because he believes he can get it.

He's opposing regulation in the EU because he knows he can't control it and it will almost certainly actually restrict him.

It's so plainly obvious and obviously cynical, it's just gross. Especially when his calls for regulation in the US are supposedly based on trying to look out for humanity on a grand scale.

I wrote my Senators after watching the recent subcommittee meeting. I think it was pretty transparent to everyone that it was largely optics, but privately reinforcing that OpenAI isn't and shouldn't be the sole spokesperson for all things AGI is still important. We're all already using AGI, but we're not using OpenAI's AGI, and we don't need their pre- and proscriptive alignment.

It's starting a discussion, and more tacitly, a negotiation. We need to weigh in to the right people proactively if we want to be represented and start defining intelligent regulation.

It's the business jackass tactic, enter a market and then use the government puppets to raise the barriers to entry.

I've seen it a few times. It's one of the steps to creating a government-assisted monopoly.

My guess:

3) Get ahead of the attack on AI before the attack begins.

Once LLMs start to make certain jobs obsolete, I predict one of the major US political parties will take a position against AI (as a job destroyer similar to outsourcing) and the other party will take the opposite side. No idea which party will take which side yet.

Once it becomes an issue of this magnitude, it’s better optics to have gone before the Senate “asking” for regulation before they come to you demanding regulation.

So, he wants regulation but only if it benefits him:


why else would anyone want it?

Anyone with an ounce of consideration for others will want regulation, on any subject. Otherwise, we'd be driving around doing 150 in cities.

Is it up to car manufacturers to lobby legislators to regulate cars' top speed though?

speed limits and seat belts didn't come from the auto industry, it came from people lobbying for regulation that benefited them

the professed aim of the company is to protect humanity by building benevolent AGI

I suppose it's now obvious to everyone that that's a lie.

For starters: to, say, protect privacy and intellectual property? Both of those areas are kind of underserved in the current AI model landscape.

you think Sam Altman wants to protect anyone else's IP other than his own? he's practically helped create an IP laundering machine

Thats exactly my point.

It's clear why Sam Altman wants it; the comment was about why people should have fair, just regulation, not the kind pushed by a corporate agenda.


lol, very true

but a 'good' reason for legislation is if it benefits everyone, including you.

Well, fear, tho I guess it's the mirror of "benefits"

To benefit everyone

This is where the rubber hits the road. In the US you can walk around and sagely tell everyone how concerned you are and how you'd support thoughtful regulation safe in the knowledge that a few million on good lobbyists will completely derail any attempt at regulation. In the EU regulation will actually happen and the US CEOs still really haven't figured out a way of guiding this how they like. Threats like "If you do that we'll leave" just don't work on EU regulators, and US tech is a fantastically easy target for EU regulators.

I know that there are many people at OpenAI who worry about the risks posed by AI and support real regulation to mitigate these risks. That said, given that Sam Altman’s position on climate change is something along the lines of “we shouldn’t reduce emissions now because we can develop tech to fix whatever we’ve done later” (e.g. https://twitter.com/sama/status/1445059564114563080), I’m skeptical that he personally sees AI regulation as anything other than a means to regulatory capture.

Paulg complaining about the EU regulations while ignoring Sam's Congress hearing is just a crazy contrast and hypocrisy to me.

I like paulg, but in one of his essays he complained about jocks stealing the Nerds' lunch money and that software engineering was a way to avoid the jocks.

Now, his protege is attempting to steal the Nerds' lunch money with CoPilot.

I've always felt like AI is the ultimate nerd revenge. I still kind of do. Except I like this framing: the nerds are trying to take each other's lunch money now too :)

What do Sam or pg have to do with CoPilot?

Copilot is powered by OpenAI Codex

Thank you; TIL!

I assumed without checking that Microsoft would have their own competitive offering in the space.

So to be sure of the basics: Altman goes before the U.S. Congress to promote AI regulation, and then at the same time makes arguments that imply not wanting regulation for the EU market? Cute little double standard...

Not in the least related to him selectively changing postures because he worries about a real threat of upstart competitors forming in the highly dynamic U.S. market, while also worrying about a loss of manipulable customers in the much more passive and non-competitive EU market.. Couldn't be something like that.

Probably also because he doesn't expect any strong regulation to come from congress regardless of how much he talks about it.

Feels like a first step in a long dance performance. Good that EU regulators are already looking at this space (given that nobody else does).

The main risk for them is that the actual risks from use and abuse of AI in various domains will only start showing up later. So early regulatory work might appear silly or misplaced. But that is not a reason to adopt a "wait-and-see" attitude.

There is so much recent precedent in the software domain: the privacy disaster from targeted adtech and the speculative manias of crypto. Once large numbers of users are lulled into certain behaviors under false pretenses, it becomes much harder to reverse the social damage.

While privacy is a connected risk here too, there are much wider possibilities for abuse once algorithms are on the loose, processing and influencing without any transparency or accountability. We know that people will try anything that is not explicitly illegal if it gives them financial or political leverage. Gray areas of missing regulation are an open invitation for "creative" thinking and regulatory arbitrage.

The noise "not to suppress innovation" will be enormous. But the concepts and tools for sandboxing and keeping a close eye on developments are there. The idea that the only way to innovate is to move fast and break things is just a transparently self-serving dictum that should be put to rest.

They should keep an eye for oligopolies too. There is no reason to endure more decades of gratuitous "winner-takes-all" dynamics. The technologies are exciting, there are clearly positive scenarios. Let us hope we are smart enough to properly regulate smart algorithms.

Ah, so the regulator has become the regulatee.

Quis custodiet ipsos custodes?

Pathetic mixed messaging from a hypocritical CEO.

Releases the most powerful LLM the world has seen pretty much publicly and next to free.

However, now that he has the lead, he wants everyone to be regulated and not allowed to open source it.

Tells the US gov, where his competition is most likely to come from, that they need regulation.

Tells the EU, where the actual enforcement of privacy and legislation happens via billion dollars fines, that he doesn’t want regulation.

Comical display.

I don't know how AI regulation will work, but wouldn't it be feasible to restrict regulation to usage in specific domains? Like how you can regulate software used in finance, health care and others but you can't just restrict "software" in the general case. You can't prevent me from making a good health care suite, it's just that I might not be able to operate it for profit and thus not expose the general public to it - which I find sensible.

I would find it amazingly depressing if capable AI like GPT4(+) were in general blocked from the public, as if it were heavy assault vehicles or nuclear devices.

Just block specific instances of it. Just block WhateverShadyStartupDecidingIfYouGetAJob using GPT for example, I'm fine with that. If I can replicate their work at home, leave me be. Regarding this tech as if it were explosive devices is way too heavy handed for my taste.

One day he says AI is frightening and must be regulated, one other day he says I don't want AI regulations...

"It must be regulated according to my ethics" isn't a new stance in the tech world.

It's also possible that any calls for reasonable regulation were just pandering towards investors. Key FTX people also called for regulation of the crypto market, after all. It projects a willingness to be a good and compliant company, even if they'd wish they themselves wouldn't have to comply.

Exactly this. I am pretty sure he's bluffing, and even if not, by the time they decide to pull their operations, many alternatives would be on the market (there are already many).

If we can comply, we will, and if we can’t, we’ll cease operating… We will try. But there are technical limits to what’s possible.

He didn't say he doesn't want them. He said AIs, for a lot of reasons, may not be able to operate within the confines of potential regulations. For a simple example, they may be trained on work that was not properly licensed in the EU. That's only one compliance challenge in one dimension, but there are a large number of dimensions in which AIs might fall outside the mandated parameters.

Obviously people will try, but there are reality based technical limitations.

It’s as if he wants AI regulations that only benefit OpenAI.

Altman is the new BF

Good, space for better gardening of the technology.

This reminds me strongly of the famous cartoon "A brief history of corporate whining" [1]. US companies said the same when the EU enacted the RoHS regulations, they said the same when the EU passed the GDPR, and it will be just the same with AI regulations: the European market is just too massive to ignore it.

[1] https://leftycartoons.com/2009/09/04/a-brief-history-of-corp...

Every thread about GDPR has people (I assume from the US, especially given it being HN) whining how "hard" it is to implement (just don't store user data by default, how hard is that?) and how "overbearing" it is.

Then big surprise when another news about US spying on its citizens harder than people thought drops...

If the law is that simple, then why did it take 11 chapters and 99 sections to say that?

Because European laws tend to be very precise and detailed, so that people can rely on them instead of a situation in the US where the effective boundaries of the law are only created in the subsequent years by the courts.

It is easier to read the law and the cover text explaining the reasoning behind the regulations than to read a vague US/UK style law and then having to research court rulings, looking which of these conflict with each other or if they're even applicable (as I recently learned here, y'all's federal courts are organized in circuits and a ruling of a federal court is only binding for that circuit).

And yet small businesses in the EU are still getting caught up in it.

Of course, because a vast variety of businesses had no security whatsoever and a bunch of PII could just be accessed by anyone on a file share.

The vast majority of cases are "we haven't even tried in the first place".

And the majority of fines are in the range of four to low five digits.

At least from what I observed, the "mistakes" are generally fined pretty low if the organization can prove it took the necessary steps in good faith, but stuff like "not telling users about the breach" is treated VERY harshly.

One of the small businesses in question wanted to use Shopify and got fined.

> One of the small businesses in question wanted to use Shopify and got fined.

Citation needed. Who? This example seems very fictional, as Shopify operates in the EU; there is no fine for simply "using Shopify". Handling the sensitive financial data is literally what such services do for their clients.

I see.

It hasn't been big news since, so in the six months since then, did Shopify manage to conduct their business legally, or did the business go to one of their several competitors who care more about operating legally in the EU?

IDK, is this related? https://www.reuters.com/business/retail-consumer/eu-says-sho... https://ec.europa.eu/commission/presscorner/detail/en/ip_22_...

"Company complies with consumer safety law" is hard to spin as a bad thing, but go on, what have you got in that regard?

The Shopify announcement was regarding fake goods. It was not regarding the GDPR

> complaints mainly related to web stores hosted by the platform, found to have engaged in illegal practices, such as making fake offers and fake scarcity claims, supplying counterfeit goods or not providing their contact details.

The GDPR basically makes it illegal for an EU online business to use any SaaS service in the US or in this case Canada.

It was never meant to “protect” anyone. The entire purpose was to make it easier for EU companies to compete.

> The GDPR basically makes it illegal for an EU online business to use any SaaS service in the US or in this case Canada.

No, only if they cannot comply with the law.

> It was never meant to “protect” anyone.

That is the opposite of true. I've seen it help internally at a data storage level, and seen people make use of it to protect themselves. You do not know what you are talking about.

> No, only if they cannot comply with the law.

So exactly how is an EU citizen supposed to use a service like Shopify - that has to store PII - and that's not based in the EU?

> That is the opposite of true. I've seen it help internally at a data storage level, and seen people make use of it to protect themselves. You do not know what you are talking about.

And it also killed a small business who wanted to use Shopify to sell completely legal things.

Not to mention that the same EU that is being applauded for "protecting your privacy" is trying to get a law passed so they can have a backdoor to any E2E encrypted methods.

It’s not that they want to protect your privacy. They just want to be the only ones who can surveil you.

Then use provider that respects privacy. You guys would give up your rights for a pack of chips..

It’s funny for someone to claim that we would give up our rights when you are begging the government to limit your choices instead of using your own free will to choose based on your priorities.

The EU is constantly taking away your right to choose which companies you want to use and it’s being celebrated

> you are begging the government to limit your choices instead of using your own free will to choose based on your priorities

This is the "pro-crack-den" argument. Boring, boilerplate libertarian dogma. Not suitable for use in the real world.

So you’re not capable of choosing which websites you go to?

This is a non-sequitur response.

How so? Why do you need the government to tell you which products you can use? If you don’t like the policies of a particular company, don’t use their products.

> If you don’t like the policies of a particular company, don’t use their products.

So you're saying that the crack houses should be legal and open to all. Cool.

As opposed to the War on Drugs that just focused on minorities and the fact that cocaine has a lot lower penalties than the same amount of crack because of who uses it?

Or do you mean that when the drug epidemic hit “rural America” politicians started treating it “as a disease” instead of a moral failure when it was all about the “inner city” drug problem?

But either way, are you comparing visiting a web page that stores data in the US, which you can choose not to visit, to going to a crack house?

If you don't store user data, none of this applies.

If you do want to store the data, the rules just specify what and how, with enough detail that a battalion of Facebook lawyers can't get around it. It's mostly sensible when you get around to implementing it.

For example, if you store data for, say, CCTV purposes, you have to tell people they are being monitored, for what reason, who administers the data, etc.

Article 6 covers filming public spaces, i.e. people don't have to give written consent, because the extra security that cameras provide is considered a greater good than individual consent.

But at the same time it allows processing only for those purposes. You can't just go on Facebook and post some video from a security camera of someone doing dumb shit. You can't just film inside a toilet. You can't go and monetize it in any way, etc.

You also need to do the rest of due diligence of making it secure, not sharing with 3rd parties unrelated to the goal etc.

IIRC some company got a $200k fine for doing exactly that: just posting basically some dumb shit people did, without even blurring out the people in all of the videos. Which triggered an audit that discovered that far more people than should have had access to security footage.

Similar things apply to storing and analyzing logs for security purposes. They are considered "greater good" under Article 6 and you can store and use them for, say, blocking malicious users on a WAF, but you can't then go and hand them to your marketing team, and you would have to anonymize them if you, for example, wanted to give those logs to developers to debug problems.
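The pattern described above (keep raw logs for security use, pseudonymize before any other use) can be sketched roughly as follows. Everything here is a hypothetical illustration, not anything the GDPR itself prescribes; the field layout, regex, and hashing scheme are assumptions.

```python
import hashlib
import re

# Matches dotted-quad IPv4 addresses in a log line (illustrative only).
IP_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def pseudonymize_ip(ip: str, salt: str) -> str:
    # One-way salted hash: the same client still maps to the same token,
    # so developers can correlate requests, but the original address
    # can't be recovered from the token alone.
    return hashlib.sha256((salt + ip).encode()).hexdigest()[:12]

def anonymize_line(line: str, salt: str) -> str:
    # Replace every IP in the line with its pseudonym.
    return IP_RE.sub(lambda m: pseudonymize_ip(m.group(0), salt), line)

raw = '203.0.113.7 - - [21/May/2023] "GET /login HTTP/1.1" 200'
print(anonymize_line(raw, "rotate-me"))
```

Rotating or discarding the salt later would break even the correlation, turning pseudonymization into something closer to full anonymization.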

And a small EU salesman had to shut down because he was using Shopify…

Citation needed, who is this non-existent salesman?

I see sites all the time that use shopify at checkout to handle the sensitive financial data. That's literally what they do for their clients.

any chance can they take Microsoft with them?

and the Tesla self driving

I've spoken to many people in the USA that are anti-union and anti-regulation.

The impression they have is that regulation is used to stomp out small businesses (and unions have some propensity for corruption).

The EU regulation does not come with the same feelings, aside from the very aggressive FUD around GDPR. EU regulation is typically seen as something for the consumer/public good and not intended to stifle competition.

With that in mind, it's interesting to see him arguing for regulation in the US, yet against regulation in the EU.

Very important to keep in mind that a large part of European regulations are meant to encourage competition within the EU. I'm not clear on the results, but the goals are very often clear. There are also moral and ecological stances, but still trying to impose them in a way that doesn't stifle competition within the EU.

The difference in philosophy is part of the issue. The EU leans towards the precautionary principle (first, show us that it is safe). The US leans towards reactionary regulation (something terrible happened, let's make sure it doesn't happen again).

There is also a large difference in practice. In the US, it is possible to get rid of regulations via lawsuits or "lobbying". It is also possible to corrupt public discourse with large amounts of advertising (see https://news.ycombinator.com/item?id=36067401). It is much harder to apply US-style lobbying to EU regulators (one of the motivations for Brexit was to get away from EU regulators), so American companies will try very hard to get out from the regulations while making money from EU customers.

What he wants is toothless & after-the-fact regulation that will not reduce his income.

Feelings from whom? Because for sure that's a very common feeling in Europe about EU regulations. It was a big part of the political campaigning around Brexit, even.

You even admit this - nobody feels this way, except for the people who very strongly feel that way but their feelings are "FUD" so don't count?

it is literal fear, uncertainty, and doubt.

GDPR often gets blown out of all proportion by people who seem to have a vested interest in having the regulation scrapped.

This played out the same way with the EU cookie directive, which was intended to give people a say in how they were tracked, with massive carve-outs for legitimate use. The intention was that sites would cut down on tracking to avoid needing a popup, but it ended up in a form of malicious compliance where everyone just put up pop-ups instead and tried to hide the opt-out with dark patterns, massively overblowing what the regulation even entailed in the first place by claiming that the pop-up was necessary.

GDPR creates a lot of FUD because:

- The penalties are massive: fear

- The wording is vague: uncertainty

- What to do is unclear: doubt

That's entirely on the EU. I have no vested interest in GDPR one way or another at the moment and yet having been involved in assessment projects in the past, it is a crap law that makes the EU look incompetent and stupid. Nobody can figure out what the hell it means in all kinds of common scenarios. HN is full of people saying it's simple; I have yet to encounter any of these people inside actual enterprises, doing actual implementations of it. People who think it's easy are invariably eurofans engaging in wishful thinking.

> the intention was that people would cut down on that tracking to avoid needing a popup; but ended up in some form of malicious compliance

That's not how it was. The EU passed an incompetent law with unclear goals that kills the ability to do business of anyone who doesn't implement the banners. Cookies are in fact necessary, not optional, not nice to have. They are required. Thus everyone implements the banners, the internet got worse, the EU now looks foolish in front of the world and yet can't accept it.

The problem is, everyone talks about the GDPR but they don't actually read it.

If you read it, it's really clear: if you're not sending information to third parties then it's exceptionally easy to be compliant. A lot of things are "to best effort" which puts the burden of proof on the prosecution that you were grossly negligent.

There is a dedicated website to explaining the regulation for dummies; https://gdpr.eu -- the entire thing is 11 chapters, half of that is for governments to harmonise their national regulations to be GDPR compliant without people needing to invoke "GDPR", the remainder is a mix of common sense and ensuring digital privacy and consent. It's really clear for a legal document.

I should know, it's my part of job to read them and to ensure my company is compliant. It's not hard. The most loud people about this are lawyers who have a financial incentive to make it as hard as possible.

It's hilarious that you think standards like "best effort" are clear. The lawyers in your company are loud about it because you have no way to know if you're in compliance or not. You're just hoping you are. The whole concept of compliance doesn't even exist because GDPR says nothing, so at any moment something that sounds reasonable can be considered no longer in compliance and there's no way to predict this. A standard like "reasonable effort" or "best effort" can change simply because your competitors started doing something you weren't even aware of.

Even basic things like backups are made unclear by GDPR! Truly, a more incompetent piece of legislation is hard to find. The only people who disagree on this are people who have hopelessly naive views of what regulators are like, and assume they'll always be friends. Nope. You have to read these rules as if the people who are enforcing them are completely unreasonable sadists. The lawyers understand that, you don't, so I hope you aren't in a position where you may lose your job in case of GDPR violations.

Best effort has a fairly precise legal definition:

The principles for satisfying “best efforts” standard have been enumerated as follows:

* "Best efforts" imposes a higher obligation than a legal "reasonable effort".

* "Best efforts" means taking, in good faith, all reasonable steps to achieve the objective, carrying the process to its logical conclusion and leaving no stone unturned. However, it does not require a party to sacrifice itself totally to the economic interests of the party to whom the duty is owed, although the interests of the other party must predominate.

* "Best efforts" includes doing everything known to be usual, necessary and proper for ensuring the success of the endeavour.

* The meaning of "best efforts" is, however, not boundless. It must be approached in the light of the particular contract, the parties to it and the contract's overall purpose as reflected in its language.

* While "best efforts" of the defendant must be subject to such overriding obligations as honesty and fair dealing, it is not necessary for the plaintiff to prove that the defendant acted in bad faith.

* Evidence of "inevitable failure" is relevant to the issue of causation of damage but not to the issue of liability. The onus to show that failure was inevitable regardless of whether the defendant made "best efforts" rests on the defendant.

* Evidence that the defendant, had it acted diligently, could have satisfied the "best efforts" test, is relevant evidence that the defendant did not use its best efforts.

Mere reasonable efforts will not suffice to meet the “best efforts” standard. Neither will occasional efforts made from time to time suffice. A higher level of effort is required.

Examples of situations where the standard was not met include:

* Contract for the purchase and sale of a house required the purchaser to obtain financing. Court found that the purchaser failed to do so because he applied for a loan for an inflated sum (by 60%), delayed the loan application, and that true reason for these actions was that the purchaser simply did not want to buy the property because he received a negative appraisal of it.

* Contract for the purchase and sale of land. The contract required the vendor use best efforts to obtain subdivision approvals. Obtaining approvals was also a condition precedent in the agreement, meaning that the parties could walk away from the transaction if the approvals were not obtained. The vendor hired an agent to obtain approvals, but the agent failed to advance the process. As time went on, the price of the land went up and the vendor notified the purchaser that it ought to be discharged from the obligation to close the transaction due to not receiving approvals. The Court found that the Vendors did not use best efforts. The court commented that to satisfy the standard, one must show progress, as well as reasonable and sensible response to roadblocks.

Examples of where the standard was met:

* An agreement to lease required tenants to obtain a business license to operate a car dealership on best-efforts basis. The tenants submitted an application within 3 days of signing the lease, did so accurately, but were told that the process would take significantly longer than anticipated. Best efforts standard was met.

Generally, there are more decisions where this standard is found not to have been met than where it was met. Shy of impossibility, the best efforts standard imposes onerous obligations on the party that agrees to be bound by such a clause.


The right to be forgotten does not apply to data that is not being processed, so your backup situation is clear too and if you ask then you get pretty clear answers. Obviously if you restore the backup you would be expected to re-process right to be forgotten requests, which can be stored for the duration of your backup.
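The restore-then-replay approach described above can be sketched in a few lines. This is a hypothetical illustration of the pattern, not an official compliance recipe; the data shapes and function name are assumptions.

```python
from typing import Dict, List

def replay_erasure_requests(users: Dict[str, dict],
                            erasure_log: List[str]) -> Dict[str, dict]:
    # Erasure requests are retained for at least the backup retention
    # window; after restoring a backup, they are replayed so that
    # previously deleted users don't reappear.
    for user_id in erasure_log:
        users.pop(user_id, None)  # idempotent: ids already gone are skipped
    return users

restored = {"u1": {"email": "a@example.com"},
            "u2": {"email": "b@example.com"}}
erased_since_backup = ["u2"]
print(replay_erasure_requests(restored, erased_since_backup))
```

The key design choice is that the erasure log itself outlives the backups it applies to, so any restore point can be brought back into a compliant state.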

Your lawyer has well and truly convinced you that it's difficult. It's not, and if you have a serious question, you can send it to your country's DPO office: https://edps.europa.eu/data-protection/data-protection/refer...

The backup question, for example, was clearly (publicly) answered by France's DPO.

To be clear here: I don't think you've actually interacted at all with the apparatus surrounding GDPR. It's not my lawyers who are loud, it's lawyer consultants; it's lawyers of companies whose business model is not compliant (trading personal data). It's people in the US and UK who are trying to spread as much uncertainty as possible to either make the EU look incompetent or non-competitive and/or weaken the bloc.

It's absurdly easy to be compliant. For all the whinging, if you're taking the necessary steps to secure data that you should have always been taking, and you don't sell peoples data, then the only thing you need to bake in is a "delete account" function- even things like legal records are exempt so transaction logs are not subject to GDPR. Just read the text.

None of those definitions are precise, they are actually circular? I don't really understand how to debate with someone who can't see that, we're just divided by something much deeper than some bullet points. Precision doesn't come from volume of words, it comes from clarity of thought. Defining "best effort" as "things known to be usual" is just playing with a thesaurus, it's not actually providing clarity.

You need to read more legal documents mate.

I guess the reason people are scared is that GDPR is the first legal text they ever came into contact with. These are really common legal terms which have huge numbers of precedents attached.

Oh no! This time we really have to be 'Open AI' and actually be 'open' and 'transparent' about our models and training methods this time?

More regulations for thee, but not for me (O̶p̶e̶n̶AI.com). But little do the regulators know that the majority owner (Microsoft) will lobby and bribe them a little to have regulations to benefit both O̶p̶e̶n̶AI.com and Microsoft, etc over everyone else.

I’ll be glad if EU regulates the sh* out of this. Companies like OpenAI should not have a reason to exist in today’s world.

The implicit threat is fascinating. He clearly believes in a higher authority around what is or is not acceptable regulation. From whence comes this magical source of authority and reason? I mean, sure, arbitrary law is bad. But does he truly view the EU like that?

I wonder what jurisdiction he winds up in and why?

Altmann schreit eine Wolke

(old man yells at cloud)

Regulation that puts up a barrier to entry that prevents competition: good

All other regulation: bad

This ironically might be a boon for the open source models. The problem seems to be privacy related so companies will be forced to run their own models instead of OpenAI, Google and Microsoft owning all the AI apis.

No no no! Sammy totally wants regulation. He just wants to help wording it ;)

He knows perfectly well that the EU is able to pass regulation that will sting whereas the American federal government is never going to pass any meaningful tech regulations. We've seen it with social media. The EU has GDPR, California has CCPA, Montana has banned TikTok yet after two decades there's still not even a national privacy framework in the US.

While I agree with other commenters that this is kind of hypocritical on its face, there is a steelman interpretation that's more generous. It would probably go something like this:

The US crafts regulations that work with businesses, while the EU crafts them and is OK with sacrificing businesses. Additionally, the EU crafts regulations that accomplish nothing but increasing costs and reducing quality without helping the end goal. For example, all the cookie banners that have pervaded the internet: they don't actually help reduce tracking, they make the web less usable, they've made compliance more complicated, and building a website is more expensive since you have to add these banners. Sure, GDPR has strengths, but the cookie aspect is a miss. In fact, the EU is as easily bamboozled as the US, considering that Google had a huge hand in shaping GDPR legislation.

Another steelman: the US hasn't started considering how to regulate, so Sam is saying "great, here's how you regulate." In the EU they've already outlined principles (proposed regulations? Can't recall), and maybe he sees those as problematic and his discussions with them haven't yielded improvements.

Of course, even the steelman interpretations are problematic here, but just saying. I think there may be more nuance here than the hot take based on the headlines alone.

> …he was miffed by the way the European body defined “high-risk” systems..

> According to Time, the OpenAI CEO said “If we can comply, we will, and if we can’t, we’ll cease operating… We will try. But there are technical limits to what’s possible.”

> He called for some regulation “between the traditional European approach and the traditional U.S. approach,” whatever that means.

It’s important to catch the author’s non-neutral tone here. They’ve clearly staked out a position and it’s worthwhile to keep that in mind.

Yeah, sure. Like Microsoft does not like EU money. Though it is funny he asks the USA for regulation to regulate his competitors.

He does not want the USA to regulate his company. He will lobby against every attempt to do so, he’s proposing a new agency specifically because it’s the most unwieldy proposal possible.

This has been the tech playbook for close to two decades now, it’s time we stop participating in the charade.

I get the impression Altman would like the US to regulate other AI companies, so that only the "responsible" ones can provide AI based services (i.e. OpenAI and Microsoft). It would take care of his moat problem.

I think he will be happy to split the market with Google and Apple, but he probably is scared of open source: most servers run Linux or BSD and open-source programming languages, and most server stuff is based on open-source libraries. It would be the same for LLMs; you would have many small but very specific LLMs.

So, he wants regulation unless it hurts OpenAI's bottom line?

So he doesn't really want any regulation at all.

Clickbait title that leads to an inflammatory, divisive article which doesn't explain much.

EU market is too big to ignore. We have seen this with Meta and others play out multiple times since GDPR has been increasingly enforced in the past decade. That's why it's important the EU keeps pushing, hopefully mostly sensible, regulation. There is no other counterweight that's aligned with consumer interests rather than corporate interests.

Sure, you can accuse EU of hidden protectionisms blabla, but for now, this is better than nothing.

> EU market is too big to ignore

It's not what it used to be.

The EU makes up around 15-20% of world GDP depending on how you measure it, but that's subject to disproportionality high taxes and compliance costs.

Meta only gets around 10% of its ad earnings from the EU [1]. I'm not aware of the figures for Google and Microsoft or whether such figures are even available.

Many global banks left the US or reduced it to skeleton operations after the wave of regulation following '08. Unfriendly enough behaviour might cause a similar reaction from tech in the EU.

[1] "What we do know is that roughly 10% of worldwide ad revenue comes from ads delivered to Facebook users in E.U. countries", Meta CFO Susan Li quoted on stratechery, https://stratechery.com/2023/metas-low-e-u-arpu-the-supreme-...

I arrive at something like 25% of global nominal GDP according to Wikipedia, but I just quickly added up the numbers in my head, might be wrong. The Facebook ad numbers are probably not too relevant of a comparison though, since Microsoft and OpenAI have a different business model. I'd suspect the share of potential global revenue must be something in the >20% range.

But even if the EU was only 10% of the potential revenue - taking a 10% hit is not something large corporations do particularly willingly. I bet as long as they can find a way to make more money in the EU than they spend, they will. It's a business decision. Doesn't mean there won't be any whining.

> I bet as long as they can find a way to make more money in the EU than they spend, they will

I think this is where the structure of the regulation becomes important.

If GPT-4 had to be retrained with regulator friendly data or assume liability for misinformation in the EU then there might not be enough of a profit incentive to stay.

I imagine they'd stay in the EU however if they just had to add more banners, data localisation and reporting. Albeit with whining.

Given how many companies were saying they would leave Russia versus how many actually left Russia, there's a gap between what companies say and what they do.

> but that share is subject to disproportionately high taxes and compliance costs.

Well, "disproportionately high" is subjective at best, and those high taxes get evaded within the EU anyway. All the big corporations do that, btw.

VAT, employment taxes and regulatory fines are hard to avoid paying in the EU, even if corporate profit taxes get moved around a bit.

"moved around a bit" well, the amount which gets moved around is enormous. I don't know how its done now (I do know its still being done), but the way it was done was it was getting moved to The Netherlands to Ireland to The Netherlands. There's even a Dutch term for such a company: brievenbusfirma, and the location knows for them residing: Amsterdam Zuidas. So we get the curious situation where Ikea is a Dutch company.

You're crying wolf about "disproportionately high taxes and compliance costs", but these examples of yours are drops in the bucket compared to the scale of the tax evasion.

VAT is the largest tax most software companies pay and the one most difficult to avoid. EU VAT rates are some of the highest in the world.

Corporation tax may be avoided but that's true of every geography so doesn't affect the comparison of EU vs non-EU.

About compliance costs - the majority of >$1B tech fines come out of the EU.

Tell investors you're throwing away 10% of the market and they will fire you

> The EU's makes up around 15-20% of world GDP depending on how you measure it, but that's subject to disproportionality high taxes and compliance costs.

That would have to be corrected for tech spending; for example, the fact that Asia produces most of the world's electronics has zero effect on Facebook's profits.

Google search and many other big tech divisions already did that with China.

When a country/region deviates from the rest of the world in a big way there comes a point at which you can't offer the same products and services as other markets.

There is a big difference between "censor everything you do to comply with government doctrine" and "just don't store PII unnecessarily".

The latter is actually very burdensome because it affects advertising profits

I'm still waiting for Google/Facebook to leave Europe.

Never seems to happen, despite the incessant whining. Maybe the fines were not heavy enough.

Paying a billion dollars per year in fines is still cheaper for Facebook than leaving the EU.

Shrug. At least they pay tax now, even if in a weird way...

> Tell investors you're throwing away 10% of the market and they will fire you

Really not. 10% extra market share is something for large companies to worry about, the sort of thing that requires big spend in localization. For a company the size of OpenAI, there are bigger fish to fry. Europeans will use VPNs and route payments via resellers rather than not have access to the latest tech.

The EU has gotten steadily smaller as a percentage of world GDP over time. GDP per capita used to be higher in France, Germany, and the other big EU countries than in the US; now it's close to 33% lower.

Sam: "Please regulate AI"

EU: "Ok then"

Sam: "No, not like that"

sama showing musk there’s a new dr. evil in town.

nothing more than an oligarch dancing with government structures in an effort to prevent more equality.


I can see them mandating another GDPR-like popup saying that the content is AI-created. Europeans have the right to know if they are being bamboozled by AI-created content.

So will I.


Yep, though I wouldn't call him names. Stay civil; name-calling isn't needed to make your point.

In a nutshell: he asks for regulation to please those who are concerned yet on the fence, whereas he won't give up on the EU without a fight.

Also, he's American, not European. His interest could very well lie primarily in whatever benefits the American people, rather than whatever benefits the rest of the world. As long as he gets away with it.

I mean, Facebook and Google are getting away with spying on users all over the world.

Good. Piss off then.

In traditional European fashion, the EU is again likely to crush any emerging European AI venture with burdensome regulation while companies in the US flourish. For decades now the American wait-and-see approach has proven superior (with a few exceptions for sectors like pharmaceuticals). When the government holds off on regulation until there's more clarity about what the actual business models will look like, it gives new companies a chance to establish themselves against the incumbents. The European approach just scares new companies off, and they'll try out their new ventures in other jurisdictions.

Eh, I'm not convinced.

I do pay for ChatGPT, but I'm not on board with corporate whining by spoiled CEOs from across the pond. I'm certainly not a fan of the US's lax regulatory approach, which only ever seems to favor corporations to the detriment of the population, with this lame excuse of not scaring away the poor corporations.

If adjusting to regulation is impossible, don't do business here. It's alright.
