EU proposing to regulate the use of Bayesian estimation (columbia.edu)
119 points by tosh on April 25, 2021 | 105 comments



From page 43 of the report:

The following artificial intelligence practices shall be prohibited:

(a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;

(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;

(c) the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following: (i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected; (ii) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;

(d) the use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for the purpose of law enforcement, unless and in as far as such use is strictly necessary for one of the following objectives: (i) the targeted search for specific potential victims of crime, including missing children; (ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of natural persons or of a terrorist attack; (iii) the detection, localisation, identification or prosecution of a perpetrator or suspect of a criminal offence referred to in Article 2(2) of Council Framework Decision 2002/584/JHA62 and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least three years, as determined by the law of that Member State.

Sounds good to me.


I find this difficult to parse. Specific to (a), what are some examples of where this law would come into play?


An AI system that searches for the most addictive settings of various levers in a supposedly free-to-play game, in order to raise the amount of in-game purchases made by end users.


That depends on the definition of

> psychological harm

Are they going to go into a codebase and figure this out? If I use AI to pull these levers vs using a series of if/else statements:

  if person.has_bought_multiple_items
    increase_base_price(person)

that's legal I guess?


The wording looks to me like it's designed to combat the feedback loop of "observe user behaviour" -> "apply ML model" -> "reoptimize addiction loop"
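
A toy sketch of that loop, assuming (purely hypothetically) that the "levers" are offer settings and the "model" is just a purchase-rate estimate per setting; every name and number below is made up:

    import random

    lever_settings = ["gentle", "moderate", "aggressive"]   # hypothetical offer settings
    stats = {s: {"shows": 1, "purchases": 0} for s in lever_settings}

    def observed_purchase(setting):
        # Stand-in for real user-behaviour logs; rates are invented.
        return random.random() < {"gentle": 0.02, "moderate": 0.05, "aggressive": 0.09}[setting]

    for _ in range(10_000):
        if random.random() < 0.1:    # explore: observe user behaviour on a random setting
            setting = random.choice(lever_settings)
        else:                        # exploit: "apply ML model" (here, a purchase-rate estimate)
            setting = max(stats, key=lambda s: stats[s]["purchases"] / stats[s]["shows"])
        stats[setting]["shows"] += 1
        stats[setting]["purchases"] += observed_purchase(setting)

    print(stats)   # the loop converges on whichever setting extracts the most purchases

Whether the setting it converges on counts as "materially distorting behaviour" is exactly the judgement call the regulation would have to make.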


That’s... not true. The EU has clearly stated the need to regulate the group of technologies broadly marketed as “AI”, and that the rapid evolution of the technology means there is no point in making an exact definition of it at this time. Basically a “we’ll know it when we see it” approach.


Point (c) of ANNEX 1 includes Bayesian Estimation.


Point (c) of ANNEX 1 refers to Article 3. The author quotes a definition from page 60, so... Article 31, 32, or 33?


I think the "page 60" part is just a mistake by the author. The text they're quoting is on page 39, just above footnote 60.


Sorry, I don't understand. The first quote is from page 60 in document X, and the second quote is from page 1 in document Y, where X and Y are both behind the same hyperlink.


The EU is proposing to regulate the application of certain techniques to certain fields.

There is too much vagueness in those definitions, but the author should pause to think about what else is regulating the ethics of applying ML, or even just basic statistical modeling, to public services.

What if your car insurance was more expensive because your last name was associated with an ethnicity that happens to have more car accidents than others?

What if your sentence at a public trial was determined by an algorithm that did the same?

We wouldn't consider those things fair, and we know that the blind application of ML to socially meaningful tasks leads exactly to these kinds of problems. I'm not sure what the author's better solution is; they just seem to be flailing about...

If the ML community can't get its shit together and address fairness and ethics generally as diligently as they do precision and recall, I'm afraid that poorly-written regulations will be coming their way with a ton of popular support behind them. Misguided as experts might think that those regulations are, they're the ones playing with fire in the first place.


> What if your car insurance was more expensive because your last name was associated with an ethnicity that happens to have more car accidents than others?

But why is it fair to charge men more than women? AFAIK car insurance is more expensive for men


Whether consideration of gender should be allowed in setting auto insurance rates is indeed the subject of debate. It seems to be allowed in most US states but illegal in the EU https://www.investopedia.com/gender-and-insurance-costs-5114...

Back to the point of regulation, the fact that we're able to have a conversation about the factors that are/aren't allowed to be taken into account relies on the pricing model being relatively transparent about its inputs. Unrestricted machine learning leads to better predictions but often at the expense of transparency about what exactly is going on. If you feed first and last names into a model for example, it might learn things that correlate very closely with ethnicity.
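
A purely synthetic sketch of that proxy effect (every name, rate and count below is invented): the model never sees ethnicity, only a surname-derived feature, yet the feature ends up carrying the group signal.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)                                        # unobserved ethnicity
    surname_cluster = np.where(rng.random(n) < 0.9, group, 1 - group)    # names correlate with group
    accidents = rng.random(n) < np.where(group == 1, 0.12, 0.08)         # invented accident rates

    # "Pricing model" that only ever sees the surname-derived feature:
    rate_by_cluster = [accidents[surname_cluster == c].mean() for c in (0, 1)]
    print(rate_by_cluster)   # the clusters get different predicted risk, i.e. an ethnicity proxy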


We cannot take this lightly. What I see happening in my organization is the juridification of our work in AI and advanced automation. Isn't every piece of software code a representation of knowledge in its purest form? Where are we heading, if these politicians, who have never had to create a solution, let alone a smart infrastructure, or break codes like Turing did (which saved many lives and won us a war), are going to rule? In my world AI is not the evil genius; the data and the business goals are the elements to question. For sure the big tech companies are who was meant when this many-pager was created, and lo and behold, they have the money to sue, twist and find ways around the rules, like they do now. So the real question is how to create a safe/meaningful/trustworthy human-technology collaboration. To blame AI is too simple an approach.


As a syllogism:

- The EU is regulating the use of AI

- One use of Bayesian estimation is in AI

- Therefore, the EU is regulating the use of Bayesian estimation

Pappy Aristotle: >facepalm<


Unfortunately the blog post does not make it clear what the requirements for a regulated entity are, or when the regulation applies at all: e.g. whether an academic paper or open-source email spam software would be regulated (and subject to large fines, according to the blog comments).

Can anyone shed light?


EU loves to stick it to American tech companies.


Either that or American companies have still not learned what does not fly with the EU and repeat the same mistakes over and over again to the point that we need laws to prevent or fix the problems that they create.


Google, Netflix, Microsoft create so many 'problems' that EU citizens feel compelled to use these products every day and send those companies tons of money?

There are some fairly minor externalities in some of those companies' products that we could look at regulating, but by and large, it's not existentially problematic.

The 'scary' parts of AI probably haven't come to pass just yet, and I think it might have more to do with privacy than 'biased algorithms'.

Alongside this legislation, the EU should also be trying to figure out why it's using all of those American products instead of having champions of its own, because that's probably a bigger issue in the end.


[flagged]


Hehe, I see what you did there. At least I think this is alluding to Einstein allegedly saying that „insanity is doing the same thing over and over again while expecting different outcomes“.


It's because EU regulations are largely toothless. If disregard for the GDPR led to the closure of Facebook, that would be a different ballgame. Now it is just an additional cost of doing business to appease the EU bureaucrats and make sure any issue grinds to a halt in the unsustainable EU bureaucratic machinery.


Well, American tech companies are not the only ones using AI/ML for morally questionable activities. Think for example about China's social crediting system. At least it is good that people are thinking about the effects that AI/ML can have on (the freedom of) people's lives.


EU has universal healthcare therefore prevention makes $$$ sense.


Perhaps that's because American tech loves to stick it to the rest of the world? Just speculating.


Would be amusing if someone turned off their email spam protection in response.


Wow, that's a pretty badly researched post. And not one of the comments corrected it, during the three days it's been up. Instead they mostly gloated in a rather shallow and uninformed way.

Typical example: "The thought police have finally come to stop us from updating our beliefs…". Is this idiocy/jingoism a columbia.edu thing or is it isolated to their statistics department?

The meat of the proposal starts at page 43 (https://ec.europa.eu/newsroom/dae/document.cfm?doc_id=75788), which begins with:

The following artificial intelligence practices shall be prohibited: (a) the placing on the market, putting into service or use of an AI system that deploys subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour in a manner that causes or is likely to cause that person or another person physical or psychological harm;

(b) the placing on the market, putting into service or use of an AI system that exploits any of the vulnerabilities of a specific group of persons due to their age, physical or mental disability, in order to materially distort the behaviour of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm;

So, yes, the EU is proposing regulating the use of e.g. Bayesian estimation in implementing these practices.


Most headlines about the EU are inaccurate attacks. See bendy bananas etc.


EU regulation 2257/94 covers the abnormal curvature of bananas. There is no myth here, the EU does indeed regulate the straightness of bananas and has done for a long time.

As for this law, it is going to be heavily criticised and this rather polite post from Gellman is just the beginning. Even the quoted part in the top voted comment is absurd. What exactly are these "subliminal techniques" that need regulating? How do they work? There are no AI papers I recall that discuss how to build a mind controlling AI, let alone one that's designed to cause people psychological harm. Does the EU really believe in this stuff? I thought government belief in subliminal messaging and mind control disappeared with the CIA's LSD experiments decades ago.


It does not regulate any shape of banana though. It standardises what was being used as a classification system. A banana that is S shaped is not banned, but you can't call it Extra.


Regulation does not mean the same thing as banning.


Wikipedia has a pretty good article on the banana issue: https://en.wikipedia.org/wiki/Commission_Regulation_(EC)_No....


> There are no AI papers I recall that discuss how to build a mind controlling AI, let alone one that's designed to cause people psychological harm.

https://www.pnas.org/content/111/24/8788.full

"Experimental evidence of massive-scale emotional contagion through social networks"

"Core Data Science Team, Facebook, Inc., Menlo Park, CA 94025; and Departments of Communication and Information Science, Cornell University, Ithaca, NY 14853"


That's probably the closest, but it's not AI related. They just tweaked how often certain posts appeared based on the smileys attached to them, which doesn't involve AI or even statistics really.

Also, even if you consider the experiment unethical, it boosted as many people's exposure to positive emotions as to negative, so it's hard to claim that this is an attempt to create psychological harm. Any law that outlawed this would presumably outlaw all of psychology. And finally, it's a psych study. Is it right? Would it replicate? Who knows, by design it's not replicable. It would be bad policymaking to impose draconian laws on an entire continent based on a single questionable psych study that doesn't even have much to do with AI in the first place.


Under this proposed regulation, it doesn't have to be AI. "Machine learning, logic/knowledge, statistics" all count, as long as the goal is on the list of bad stuff in the regulation.


>subliminal techniques beyond a person’s consciousness in order to materially distort a person’s behaviour

Doesn’t this generally describe all contemporary forms of advertisement? Or, for any given advertisement, how would you demonstrate it is out of scope of this qualifier?

> in a manner that causes or is likely to cause that person or another person physical or psychological harm

Physical is pretty obvious but psychological harm seems slippery.

Not saying I’m against the idea, but this language seems like it needs a little work.


My two cents are that it seems like a pretty good start.

It'd probably be possible for well-funded organizations to use automated data collection and clandestine communication methods to collect detailed profile information about people and then disseminate it in a way that attempts to make people fall in line with some particular ideology; or simply to harass people (wasting their time, which eventually amounts to a form of attack).

Unlikely though that might seem, if such things are becoming technically feasible, then it seems worthwhile to put safety regulations in place to prevent them from taking too much hold.

Enough automation of such systems could make it very difficult for individuals -- perhaps entire groups -- to backtrack out of whatever mindset the systems are leading them towards. Let's try to avoid those kind of outcomes.


> Doesn’t this generally describe all contemporary forms of advertisement? Or, for any given advertisement, how would you demonstrate it is out of scope of this qualifier?

I guess if one is told (clearly) that "this is an ad" then it's probably fine?


It probably does and it probably includes all political advertising as well, thereby rendering any ads run by the entities promoting this effort possibly illegal.

The wording is far, far too broad in scope and is guaranteed to cause major headaches for everyone as it spends decades in the courts being refined to the point of reaching some degree of clarity.

There's some interesting impetus here, but this one needs a lot of work - it might be worthwhile to focus in more on specific areas, such as ads themselves, or registration data collection, wherein clarity might be easier to achieve.


Advertisement doesn't cause psychological harm in this context, otherwise you would get sued for abuse over phone advertisement.


What’s the standard for harm? I think that’s the part that isn’t clear to me. What if an advertisement diminishes my self-esteem to some degree?


If we could quantify the amount of time and distraction caused by advertising, on aggregate, it's possible that opinions would change.

A few companies do already have access to plenty of data that they could use to calculate these kind of reports.


Why are they regulating only AI? They should widen the spectrum of the regulation and include, for example, graph theory, combinatorics, probability theory, and any other branch of discrete mathematics, or even mathematics and statistics in general. They look like pretty good tools to perpetrate those crimes too. Or, wait, are those “practices” even classified as crimes under current laws? How many people and organizations were fined or incarcerated for those last year? Let’s suppose they are. If someone commits one of those crimes by hardcoding a system, or simply using some random generator, or simply sending handwritten letters, will this regulation apply? I really wonder how many AI-based systems the people proposing this regulation have really built.


The regulation includes things like "expert systems", so yes, "graph theory, combinatorics, probability theory" in that context counts as "AI" (this is in TFA).


Surprised that causing physical or psychological harm (in any way) wasn't already prohibited.

Isn't this just language to make the law more explicit?


Directly causing harm is always illegal, but you can't outlaw all indirect ways of causing harm, since it is unclear whether it is the sufferer's fault or not. In this case this law argues that people shouldn't be expected to learn to cope with these practices and that tech companies should instead stop doing these things.


Then can't we extend this to all forms of psychological manipulation? Like coercing people into buying stuff they don't need?


Try defining that, because you can't. If I go into the store and buy something I didn't go in there for, was I manipulated by seeing it on the shelf? Are promo videos on an online game store "psychological manipulation" because they can make you buy a game you didn't need?


A/B testing to the rescue!

    A (control): rate of purchases of the item WITHOUT the ad present.
    B (experiment): rate of purchases of the item WITH the ad present.
If A and B differ significantly then by definition the ad manipulates purchasers. And we all know companies track exactly these metrics, ripe for subpoena.
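
A minimal sketch of that comparison as a standard two-proportion z-test (all counts below are made up):

    from math import sqrt
    from statistics import NormalDist

    def ad_effect(purchases_a, visitors_a, purchases_b, visitors_b):
        p_a = purchases_a / visitors_a                      # control: no ad shown
        p_b = purchases_b / visitors_b                      # experiment: ad shown
        p = (purchases_a + purchases_b) / (visitors_a + visitors_b)
        se = sqrt(p * (1 - p) * (1 / visitors_a + 1 / visitors_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))        # two-sided test
        return p_b - p_a, p_value

    lift, p_value = ad_effect(purchases_a=180, visitors_a=10_000,
                              purchases_b=260, visitors_b=10_000)
    print(lift, p_value)   # a significant positive lift is the "manipulation" signal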

(Separate question is whether this practice is bad for consumers; I'd argue "not all ads are bad" and "no ads is never bad" so to minimize harm, we should adopt "ads are bad until proven otherwise.")


Yes, that works with ads, but the person I replied to was referencing any form of manipulation. Because what counts as manipulation? I referenced two examples of things that average consumers do, but generally wouldn't say they've been "manipulated" to do so.

And that's before you even get into the question of "what is an ad?" and "are all ads bad?" (which you mention) Because promo videos for things such as video games are common, but I wouldn't call them ads (per se). Are music singles ads for the whole album? Historically, music videos were called "promo videos" because their intent was: "we air this on MTV or the FM radio so that people will buy the album." Am I being advertised to there? Maybe? But what if I end up finding a new band I like?

You can't define this cleanly.


Manipulation requires intent and action:

    "I believe they won't do B unless I do A, so I do A."
"I believe they won't do B unless I do A" on its own is harmless—there's no action.

"I do A" on its own is unintentional. Does anyone really act without intent?

Legal systems have (imperfect) ways to ascertain intent (AKA premeditation), causality, and harm, so they can deal with the concept of manipulation.


This would outlaw advertising that is effective then, since the entire goal of an advertisement is to maximize the difference between A and B. So to do any advertising you'd have to prove it wasn't very effective? Sounds similar to types of proposed gun regulation: "your guns can kill people, just not too many too fast too effectively"

Not that that means it's necessarily a bad approach, just never heard of marketing being limited by its effectiveness. Similar to having to put those gross pictures on cigarette packages now.


Not quite: effective (AKA manipulative) advertisement would be fine if you can show the behavior the advertisement causes isn't detrimental. Advertising a new treatment for some disease is a net positive; advertising cigarettes is not.

I'm really just suggesting we flip the burden of proof from harmed consumers to advertising companies.


Nice approach. To extend on this, I wish we could scientifically determine how much overconsumption is caused by advertising, and hence how much unnecessary damage to the planet.


“All” is probably too broad, as it would include changing anyone’s mind ever for any reason, but I hope that particularly vulnerable people and known particularly potent forms of distortive manipulation would be covered.


Doesn't it make Facebook illegal?

It causes people psychological harm and uses AI.


They would need to stop doing those things to EU citizens, I guess.


By subliminal means?

I guess you could argue that it is abusing some people's lower intelligence when it suggests relevant groups for them, but that is probably a stretch.


It is worth noting that in 2012, Facebook conducted psychological experiments on its users. So they have that capability. And god knows who else.

https://www.nytimes.com/2014/06/30/technology/facebook-tinke...


Emotional state is probably an input to advertising/content engagement.

Algorithms maximizing engagement probably then grow to manipulate emotional state as a proxy variable, even if they weren't designed to.

From there, if an optimal emotional state happens to be "an impending sense of doom, terror of the pervasiveness of crime in my country", "doom scrolling", etc., the algorithm could well be causing psychological harm.

(No idea if this actually applies to this doc, as its language and aims may be technical.)


I'm not sure it satisfies the criterion of "intent" ("in order to" in the quoted text).


The problem with all these EU regulations is that they are overly detailed. If you have to mention "Bayesian estimation" in a document called a "directive", then you are doing something wrong. In an ideal world, a directive is a high-level document that outlines what ought to be done in broad strokes. Then, the national regulations answer some additional questions accounting for the local circumstances, but leave the fine-grained details to be filled in by legal scholars and the judicial system. In a properly setup legal system that actually honors the principle of subsidiarity, the question of whether Bayesian estimation falls under the law or not would be left to a judge who would look at it in the context of a specific case. So while the article gets a lot wrong, the author's intuition that it is wrong to mention "Bayesian estimation" in a supposedly high-level regulation is correct.


The list of methods that this proposal would prohibit from being used for certain pretty bad purposes reads like this:

"(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;

(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;

(c) Statistical approaches, Bayesian estimation, search and optimization methods."

I find this (machine learning, logic/knowledge, statistics) to be a pretty good definition of what they're after. The fact that very specific things are also mentioned does not seem like a red flag to me - it's the way laws are written, with examples, to give guidance to courts.


No, it’s the way laws are written in a broken legal system. In a well-designed, principles-based legal system, laws are kept short and concise, with guidance and examples being provided through more suitable channels.


When I first saw this going around, before seeing it on here, my first thought was that it's still a bad take, because how else are you going to form your inference without data collection? You're not just going to be given your conditional probability; you're almost certainly going to have to go out and collect the data to construct it (with the data collection being the regulatory item covered).

I think the hot-take machines that are Twitter/online communities have a certain amount of mind rot, where poorly researched topics make headline news, and that concerns me (which, tbf, is in itself a hot take I suppose).


Was this where you first saw it? Not many Twitter hits for "eu bayesian estimation".

https://twitter.com/AlecStapp/status/1385960305663090689


100%. PPI needs to hire a tech policy director whose knowledge extends beyond Section 230.


So, democrats. Huh.


Your quote stops short of the much vaguer subsection (c):

(c) the placing on the market, putting into service or use of AI systems by public authorities or on their behalf for the evaluation or classification of the trustworthiness of natural persons over a certain period of time based on their social behaviour or known or predicted personal or personality characteristics, with the social score leading to either or both of the following: (i) detrimental or unfavourable treatment of certain natural persons or whole groups thereof in social contexts which are unrelated to the contexts in which the data was originally generated or collected; (ii) detrimental or unfavourable treatment of certain natural persons or whole groups thereof that is unjustified or disproportionate to their social behaviour or its gravity;

Consider, for example, a supermarket chain that uses Bayesian inference to determine which products in which stores are theft-prone and need to be locked up. I think there's a strong argument to be made that this violates (c)(ii).
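
For concreteness, here's a minimal sketch of the kind of Bayesian estimation such a chain might run: a Beta-Binomial posterior for the theft rate of each product in each store. The prior, the 2% threshold and all counts are invented for illustration.

    from scipy import stats

    prior_alpha, prior_beta = 1, 99        # weak prior: roughly 1% of units go missing

    observations = {                       # (units stocked, units unaccounted for), all invented
        ("store_A", "razor_blades"): (400, 22),
        ("store_B", "razor_blades"): (400, 3),
    }

    for key, (stocked, missing) in observations.items():
        posterior = stats.beta(prior_alpha + missing, prior_beta + stocked - missing)
        # Lock the item up if we're 95% sure the theft rate exceeds 2%.
        decision = "locked case" if posterior.sf(0.02) > 0.95 else "open shelf"
        print(key, "->", decision)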


Your example wouldn't fall under it, because the decision is not applied to humans, only to the products - i.e. there's no "credit scoring" of customers and thus no impact on their ability to buy those items, only a decision on whether or not to have them in locked boxes.


Not on the level of an individual person, but on the level of groups of people. People in one neighborhood would have shops with locked boxes and people in another neighborhood would have shops with unlocked boxes. And those neighborhoods could very well have different ethnic compositions.

In this case, you can still buy an item when the box is locked, but the same thing could lead to stores in some neighborhoods not carrying commonly stolen items at all.


Yeah, on further research I think you're right. I jumped the gun a little here.


The difference between the Marina Safeway and the Mission Safeway makes it clear this isn't a property of the products alone.


The point is applying it to people, with effects on the people themselves. Take for example shops that would decide whether you can be served based on face-matching AI. And it's mainly a property of the impoverishment of the area.


Perhaps it's different in the EU but here in SF that is seen as systemic discrimination. But if it is different in the EU, then I won't judge.


Well, it wouldn't fall under this specific proposed law.

I'm not going to guarantee that it wouldn't fall against something else.


I suppose the slippery slope was something I was concerned about but it doesn't matter.


> by public authorities or on their behalf

A supermarket chain is not a public authority. This prevents governments from creating systems to evaluate the trustworthiness of people, not private businesses.


Why can't they just ban tracking? The way it is worded, it would have to be proven that the AI exploits a specific group, which can be subjective. Whereas tracking's direct goal is to extract as much money as possible, using AI to optimise.


It's Columbia. They usually hate any and every regulation. I wonder what, if any, is the rational reason for doing so.


A university for the privileged embracing toxic libertarianism? Color me shocked.


Well, basically if you won't self regulate you get (bad) lawmaker-made regulation.


I think what this blogger is concerned about is bad actors ruining it for everyone. So a few manipulative businesses do not self regulate, and many legitimate use cases of AI are lost as collateral damage.


> So a few manipulative businesses do not self regulate

A few, like the entire ad industry, Google and Facebook? :)


I cannot follow OP's reasoning. This is consumer protection related, not so much science related. If this further underpins the right of a consumer not to be unduly subjected to algorithmic decisions without proper checks and balances, I'm all for it.

Devil is in the detail though. The GDPR is practically a dead letter with thousands of complaints and infringements and hardly any substantive action.

[1] https://digital-strategy.ec.europa.eu/en/library/proposal-re...


GDPR is not simply dead; it also wastes the time of billions of people daily by displaying useless prompts that do not provide any real benefit.

OP's reasoning is that when government tries to regulate complex things that it doesn't understand, the result usually is something useless like GDPR.


Well that's plainly false. Contracts are complex and private law can solve most disputes handsomely. Insurance contracts are complex, and the valuation of those contracts as well, and European legislation has made insurers both more resilient and comparable. It took a decade to get there though.

There's no reason to presume that the use of AI in firms is too complex to be legislated. Basically I'd hope it turns out to have both an ex ante compliance aspect (a firm needs to document X and Y and prove and register Z) and an ex post aspect (individuals can sue and judges can assign damages for failing to comply). The OP has a knee-jerk reaction to a list of techniques that might lead to discriminatory practices.

GDPR isn't a bad law in my book. Cookie walls are both cargo cults and a function of dysfunctional tracking practices in the market. Lack of enforcement is the key reason the value of the law isn't recognised. If compliance with GDPR were a board-level concern, we'd be on another internet / in another world by now.


> If compliance with GDPR were a board-level concern, we'd be on another internet / in another world by now.

That's the point: the vast majority of people do not care about the issue GDPR tries to solve, and legislators do not understand it well enough to create legislation that would work, which makes their efforts useless at best, and usually harmful.

It would have been better to leave the issue alone until enough cases had accumulated from people suing companies over concrete harms.


EU - bless their souls - as soon as something interesting comes along, they immediately feel the need to regulate it globally, and often, very badly. GDPR-AI here we come.


Berlin tried to "regulate" rent prices and failed spectacularly. Created 2 markets and prices shot through the roof (Source: I live in Berlin)

The EU tried to regulate advertising and it failed spectacularly, leading to nothing but extra annoying pop-ups (Source: Worked for an online advertising company; I know how much we tracked despite those annoying popup banners)

EU doesn't understand that regulation != solving the problem. When the actual law is written down, there will be enough corporate sponsored loopholes that will simply make it harder for startups and new upstarts to displace the incumbents.


Pointing out what you see as bad regulation, doesn’t imply that all regulation is inherently bad, or that less regulation is inherently good.

I won’t argue about the merits of each of the above cases you’ve cited (Berlin rents & GDPR), as I think they’re quite complex conversations to have and they could both take hours.

As it relates to the linked article, “AI” will continue to have an impact on our lives, more so than it does today. Whether or not this legislation is good or not, doesn’t preclude the idea that this space will require guards to ensure that citizens are being treated fairly.

Black boxes are not a good way to run a free and fair society, and any being introduced should be met with deep care and skepticism.

Whether the EU will do a “good” law on this is yet to be seen, but frankly I’d much rather it get looked into during the relatively nascent stages, rather than letting these systems loose on everything and then cleaning up the mess afterwards.


When you can't even define `AI` clearly, how do you expect to understand it and regulate it?

The draft law covers any logic- or knowledge-based systems - this could mean anything from an if-else switch to a neural network.

If waiting until black-box AI creates a mess and cleaning it up later is immoral, then so is prematurely killing something before you even get to know what it is.

Regulation has to be clear, simple and easy to understand. This proposal is none of the above. Given the loopholes the Berlin govt. introduced in its law (which, btw, got struck down as it was deemed unconstitutional), even after a long, big PR campaign about combating housing issues, I have no doubts that any such law regulating "AI" will turn into a similar sham.


As I said, these systems should be met with skepticism. I would rather someone write a draft law that starts a conversation, rather than just let these systems run amok.


> Pointing out what you see as bad regulation, doesn’t imply that all regulation is inherently bad, or that less regulation is inherently good.

Well, this is a post about Bayesian inference, so technically pointing out bad regulation should cause people to update their beliefs by adjusting the prior probability that any new piece of regulation will be bad. Assuming people reason based on experience of course, which is reasonable.
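
Tongue in cheek, but the update itself is one line; a sketch with a Beta-Binomial model and a made-up track record:

    # Start agnostic about regulation quality, then update on a hypothetical track record.
    prior_alpha, prior_beta = 1, 1
    bad, good = 7, 3                       # invented counts of past regulations judged bad/good

    posterior_mean = (prior_alpha + bad) / (prior_alpha + prior_beta + bad + good)
    print(posterior_mean)                  # ~0.67: estimated probability the next one is bad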

What we have here is an unclear and highly controversial problem that many people would argue doesn't even exist at all (I don't see anyone in my own life who has been harmed by AI for example), a very vague and poorly worded regulation, which nonetheless has massive fines attached to it. That makes it pretty much a textbook example of bad regulation. And unfortunately this is the latest in a series of such anti-technology regulations from the EU, which doesn't seem to be learning how to write higher quality regulation or how to judge proportionality.


The EU as a bloc will presumably have a less dynamic AI sector as a result I suppose. I’m fine with that personally, but perhaps I’m a bit of a luddite on this issue. I just believe we should cast a very close watch on any algorithm that could make decisions about citizens. If anything, the current laws do not go far enough in my opinion, but that’s another conversation entirely.


It just means that the status-quo will be re-inforced: everyone uses AI, and it's American or Chinese AI. These kinds of laws don't actually change consumer behavior, because they don't reflect anyone's real concerns outside of Guardian op-ed pages and maybe HN. They just allow EU Commissioners to posture and try to spin weakness as a moral virtue, resulting in the EU falling ever further behind.


Then I suppose the EU will be punished for this in the global markets. I’m not interested in reading grand motives into these things. It’s easily done the other way around, and doesn’t add much to a discussion.

The man on the Clapham omnibus might not care about these laws, but if that was the standard for every bit of legislation, we'd have a very different set of laws in our respective nations.


This is even worse than trying to regulate cryptography. The bright side is that this will probably be even less successful than past attempts to regulate cryptography, too.


Or knives? Knives are not banned in the EU but trying to stab people with them usually is. Yet most people don't seem to have a problem with the criminalization of stabbing people with knives.


Carrying a knife on your person however, that is banned in a lot of EU countries. Exceptions being you need it for work, or the knife needs to be transported from one place to another.


Define "a lot". I'm aware only of the UK. In my country you can walk with a sword in public and nobody bats an eye. You just can't swing it at people.



This link also indicates:

>Carrying regularly requires a "justified reason" or a "legitimate purpose"

That was kind of the point OP was making right? Only certain uses of knives are prohibited.


It's pretty misleading to say "certain uses" when it includes the default. In the list of reasons you might have a knife, the blatantly good reasons are allowed, the blatantly bad reasons are disallowed, and everything else is disallowed.


Needing to have a legitimate purpose / justified reason is a higher bar than "not swinging at people". It means if you're a gardener going to work cutting down bushes it's ok to carry a machete, but if you're just an ass carrying a katana on your back just for the looks, "but I wasn't swinging it at anyone" won't get you out of jail.


Cool beans, so it's roughly half of them or so. Nevertheless it still seems to be a ban not on knives but on some of their uses, such as using them against people or planning to do so.



