ACM, Ethics, and Corporate Behavior (acm.org)
65 points by pabs3 on Feb 26, 2022 | 33 comments



When the ACM code of ethics was being discussed for adoption, it was deliberately kept toothless so as not to alienate large corporations and China, which would have been bad for the organization in various respects.


Do you have a cite for this? I'm interested in studying that.


I wonder how the ACM's composition affects their stance on this. If most of ACM's influential participants come from academe, it costs little to label advertising as unethical. Now if we were to discuss the exploitation of naive graduate students, then there might be a little more back and forth.

The whole notion of calling a given thing unethical in modern discourse is interesting to me, because there is not just one ethics. A person could reasonably reflect on personalized advertising and ask: "what concrete harms have been done?" I think it would be hard to point out much that is unambiguous. People like to bring up Cambridge Analytica or social media misinformation, but neither case is necessarily advertising related. You could equally well argue those cases arise from Facebook's and Twitter's reluctance to embrace information paternalism.

I think it's fair for someone to comment on social issues without "removing the log from their own eye," because what each of us see as ethical issues with other actors in the social space differs. Typically we have no problem with our own actions and abhor anything that otherwise inconveniences us. But that also makes me question whether ethics is even a real thing, or whether it's just a rhetorical cudgel with which we try to beat other people into our preferred behavior.


> The whole notion of calling a given thing unethical in modern discourse is interesting to me, because there is not just one ethics. A person could reasonably reflect on personalized advertising and ask: "what concrete harms have been done?"

Most ethical systems consider performing actions with a lack of transparency and consent to be unethical. Often even if there is a benefit to the subject.


Ethics is philosophically a real thing -- but it's not a real thing as a system of governance.

I think "business ethics" has become so prominent, only because it enables unethical people a means to subjugate a populous of mostly ethical people. If you think that's too harsh, consider that traditional ethics is usually in polar opposition to the Reaganite imperative that the primary responsibility of businesses is to make money for their shareholders.

Clearly, if you believe that surveillance capitalism is unethical, then you find yourself caught between two opposing definitions of ethics. Now your only choice is which one to pick: make money or not.

What kind of business leaders emerge from this?


I don't think I have a broader point here, but I do want to note that the ACM is (of course) a professional association more than a corporation. The idea that professions should have codes of ethics goes back to a couple hundred years BC, right? Of course, just about as old is the idea of professionals violating those codes for $$$.

> A person could reasonably reflect on personalized advertising and ask: "what concrete harms have been done?" I think it would be hard to point out much that is unambiguous. People like to bring up Cambridge Analytica or social media misinformation, but neither case is necessarily advertising related.

IMO one aspect of being an ethical person is considering ambiguity to be resolved in the 'bad' direction. This shouldn't terminate thought on the topic, just lead to further investigation by those who want to do something ad related. Are there ambiguous harms? (I haven't thought much about the underlying morality of ads themselves -- I find them annoying as a consumer but they aren't my field).


As long as the ACM continues using copyright to restrict access to the intellectual heritage of humanity, they are in no position to comment on the ethics of corporate behavior.

In this article, Vardi says, "I have seen practically no serious discussion in the ACM community of its relationship with surveillance-capitalism corporations;" well, that's because people who take ethics seriously avoid participating in the ACM community.


While in general I agree with you about those who abuse copyright, of all the organizations that I am aware of, ACM is the one that abuses copyright the least.

Unlike many other organizations, which might charge you $30 for each few-page research paper you happen to want to read, for about $200 (membership fee + library fee) you get a full year of access to the entire ACM digital library. That library contains at least a few hundred thousand research papers, maybe millions, including the vast majority of those that mattered in the first few decades of computing technology. In my opinion, those older papers are still more relevant today than many people who read only the latest work believe, because many of the latest papers frequently rediscover already known old techniques.


This kind of attitude is detrimental to the overall progress and improvement of ethics in general. Insisting on absolute ideological purity before a person or group is "allowed" (or "in a position to") make any comment on ethics is both hopelessly unrealistic and deeply counterproductive.

Yes, there are ethical stances, and records of conduct, that are so egregious they call into question anything else someone says—but to say that the ACM is "in no position" to call for improved ethics in AI and a move away from surveillance capitalism because of a completely unrelated objection you have to their conduct regarding copyright does no one any service.

Call out their bad behavior, yes—but when they say something that's genuinely positive, acknowledge that. Reward behavior you would like to see repeated, even if it isn't absolutely everything you want or responding to your personal pet peeve.


ACM is moving to an open publishing model. acmQUEUE is open.

(Not speaking for ACM)


I hope so. They own the copyright to a huge amount of their published material, which as adrian_b points out above includes most of the important historical record of the computing field. If they move to an open publishing model, they'd relicense it under Creative Commons Attribution or Creative Commons Attribution Share Alike (CC-BY or CC-BY-SA), which would be an enormous contribution.

However, as far as I know, this isn't even being discussed within the ACM leadership or membership.


You may be right. Certainly purity spirals are enormously caustic.

However, I don't think the copyright problem is completely unrelated to the problem of surveillance capitalism. Elsevier's excuse for embedding tracking cookies in their papers is copyright, and the main thing preventing competitors to YouTube from popping up is also copyright. Google's primary excuse for scanning users' cloud files without their permission is also copyright, although they also cite kiddie porn, which is Apple's excuse. Copyright is the excuse for the DRM in Amazon's ebooks, Netflix's movies, and XBox games, all of which form extremely powerful surveillance-capitalism platforms—in large part because the users are prohibited from disabling the surveillance functionality. Copyright is a powerful centralizing force in the software industry; without copyright, ex-Google and ex-Fecebutt employees would have a much easier time founding companies to compete with Google and Fecebutt, which would enable users to choose trustworthy companies instead of companies with government-granted monopolies on the software their employees wrote.

So I think copyright and surveillance capitalism are deeply intertwined, though the ACM is not itself using copyright for surveillance, to my knowledge.


"well, that's because people who take ethics seriously avoid participating in the ACM community."

No true scotsman would participate in the ACM community.


You know what... I think both can be true, and it's good for you to point out the hypocrisy. ACM has been around a long time and they've gotten some stuff right and missed a lot of stuff, but when they come out to talk about ethics, it's time to hold their feet to the fire.


Do you know anybody in the current ACM leadership? Maybe you could talk with them.


He can talk a big game about ethics, for someone representing an organization that is essentially stealing the labour of publicly funded researchers and putting it behind a paywall.

I'm a hypocrite like all the others, though; I was a member for a few years (to reduce my conference fees).


> Yet, with the exception of FAccT, I have seen practically no serious discussion in the ACM community of its relationship with surveillance-capitalism corporations. For example, the ACM Turing Award, ACM's highest award, is now accompanied by a prize of US$1 million, supported by Google.

Indeed. Let's take a look at who sponsors some of the top ACM conferences:

The Web - Facebook[3]

CHI - Facebook[2]

KDD - Facebook[1]

CCS - Facebook[0]

You get the drift...

This one is especially amusing: The AAAI/ACM Conference on Artificial Intelligence, Ethics, and Society - Brought to you by Facebook[4] and Google DeepMind[5].

Not to mention how much money Facebook gives to academic research labs. Funding PhD students and academic research is laudable, but the things they've funded over the last decade have gone directly to improve their surveillance efforts. It's been said that the greatest tragedy in computing during the 2010s was the number of brilliant minds devoted to ranking ads on web pages.

And it's not as if it were only a few of them, with enough resources left over to multitask and research all the things. No, it is a zero-sum game. Google, Facebook, Microsoft, Amazon, etc. suck up all of the available oxygen in any room. Their concerns are what everyone focuses on, to the exclusion of other things that could serve us better.

[0] https://www.sigsac.org/ccs/CCS2021/index.html

[1] https://kdd.org/kdd2021/sponsors

[2] https://chi2021.acm.org/for-sponsors-exhibitors-and-recruite...

[3] https://www2022.thewebconf.org

[4] https://www.aies-conference.com/2021/

[5] https://www.aies-conference.com/2022/


It seems relatively simple to target behavior and label it 'unethical'.

What is arguably as challenging as crash-proof code itself is the notion of a standard that is simple, complete, agreed-upon, secure, and future-proof enough that at least a large majority of people say: "Yep. Falling in line with that."

I, for one, would like to see the FSF's ideas expand a bit more into hardware and services. Yet there seems to be no rush. *Why* there is no rush may reveal a bit about the nature of hardware and service offerings.


Big talk from an organization that not even three years ago had an article with this line: "At my company, ... [w]e, quite literally, think only about ourselves and our personal profit. "


For more context I believe this is the article the quote is from: https://m-cacm.acm.org/magazines/2019/11/240379-the-benefits...

At my company, Zerocracy, we practice a #NoAltruism policy. We, quite literally, think only about ourselves and our personal profit. This might sound a bit harsh. Isn't it better to play nice and try to appease your clients? In an ideal world, maybe. But here's what we have learned about clients: they also practice #NoAltruism.

Clients want to keep costs low, and if they can, they will pass costs onto outside companies. That's why we decided to "get lazy" and only do what we are paid to do. We won't go out of our way to improve a project, refactor, or fix code unless we are getting paid for it.


Words by the CEO of Zerocracy, a Russian company


It is not illegal or unethical to be Russian; that means doing the things that people do, like publishing papers or trying to influence other thinking people. Vilification of an opponent is yet another ugly aspect of this man-made disaster occurring now.


> Big talk from an organization

It's not an organization — it's Moshe Vardi.

Communications is essentially a venue for editorials targeted at members of the ACM. Publications in Communications do not necessarily reflect the values of the ACM itself as an organization. (Publications in Communications are also not peer-reviewed.)

This is evident even just from the text of the linked article, considering the author explicitly calls out the ACM for failing to consider what he believes to be important issues:

> I have seen practically no serious discussion in the ACM community of its relationship with surveillance-capitalism corporations. For example, the ACM Turing Award, ACM's highest award, is now accompanied by a prize of US$1 million, supported by Google.


Blaming technology for societal failures... here we go again. Development of more powerful technology will always require development of responsible regulations. That's not a good argument for becoming a nation of Luddites, or, more importantly, for abdicating our regulatory responsibility to the companies that stand to profit from it.

> AI technology is the fundamental technology that underlies "Surveillance Capitalism,"

How about this -- capitalism is the fundamental system of economics that underlies "Surveillance Capitalism".

To the deleted comment:

My point in turning that statement around is that if you see surveillance capitalism as being bad (the author says this plainly), then perhaps the discussion is really about economic policies and not AI.

The author bends over backwards to paint nebulous "AI" and "machine learning" in a bad light over problems that existed long before their arrival. What these more powerful tools have done is to ramp up the problems and put them front and center. But in the end, the economic profitability of surveillance capitalism has never been challenged.

Nothing in this article really has to do with AI at all, and it's confusing why the author spends so much time trying to draw up so much negative sentiment, in order to deflect from the real underlying issues.


> Nothing in this article really has to do with AI at all, and it's confusing why the author spends so much time trying to draw up so much negative sentiment, in order to deflect from the real underlying issues.

I'm not sure if you're referring to Moshe Vardi or the deleted comment with "the author".

The former recognises that he is in a position of influence in "the" CS society and is instructing them to account for ethics better, even if that leads to uncomfortable discussions (and repercussions in the form of less sponsor money).

The latter: no clue. That comment was deleted; let's stop the discussion there. (If for no other reason than to not exclude those who missed the comment.)


> The former recognises that he is in a position of influence in "the" CS society and is instructing them to account for ethics better, even if that leads to uncomfortable discussions (and repercussions in the form of less sponsor money).

We probably haven't worked at the same places, but in my experience, unethical decisions aren't made at the bottom. I don't understand the point of the author making a populism-styled appeal here. What are we going to do, ramp up cancel culture? Furrow our brows harder and wag our fingers more sternly? These things aren't working, and if anything, have allowed the actual problem to persist: "surveillance capitalism" is legal and profitable. If the best company doesn't take up the profit, the worst certainly will.

If a business stands to profit by doing unethical things, how can we expect them to act differently if they won't face any legal accountability? Most people in the US believe that it's the primary responsibility of companies to make money for their shareholders by any legal means. If we collectively deem surveillance capitalism to be predatory, then the only answer is to make it either unprofitable or illegal.

I feel like I should say something about AI, but it adds nothing to this discussion, other than it has made surveillance capitalism more aggressive. We used to have little guns, and now we have bigger ones. This will always be the case as long as technology is pursued.


To add a further reply, in addition to the excellent sibling comment: Indeed, decisions are implemented at the bottom. E.g., when some low-cost airline decided that "random seat assignment" should not include sitting next to each other if you booked more than 1 ticket but didn't pay for assigned seats, someone coded the equivalent of

    // keep reshuffling until the unpaid pair is no longer seated together
    WHILE nextTo(seatA, seatB) && (unpaid(seatA) || unpaid(seatB)) {
      reassignSeats()
    }
And there are a lot of similar stories (dieselgate comes to mind).
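As a minimal runnable sketch of the same idea (hypothetical function and variable names throughout, not any airline's actual code), it might look like this in Python:

    import random

    def any_adjacent(seats):
        # Treat seats in the same row with consecutive letters as adjacent,
        # e.g. "12A" and "12B" (simplified seat-map assumption).
        parsed = [(s[:-1], s[-1]) for s in seats]
        return any(row1 == row2 and abs(ord(a) - ord(b)) == 1
                   for i, (row1, a) in enumerate(parsed)
                   for (row2, b) in parsed[i + 1:])

    def assign_random_seats(group, free_seats, paid_for_assignment):
        # "Random" assignment that quietly keeps reshuffling an unpaid
        # group until its members are no longer seated together.
        seats = random.sample(free_seats, len(group))
        while not paid_for_assignment and any_adjacent(seats):
            seats = random.sample(free_seats, len(group))
        return dict(zip(group, seats))

A few lines of innocuous-looking code, and it gets written by someone at the bottom.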

Now I expect the person charged with implementation to push back. Not necessarily to put their job on the line, but at least not to just continue without any comment. As IT professionals, we have a responsibility to society - and this is part of it. (Another part is teaching those that become IT professionals to do this - doing my part!)

I will readily agree that such problems aren't necessarily fixed only by implementors calling out the ethics. But it is the least we can do. Let's at least strive for the bare minimum of social responsibility in our profession. That is not a high bar, but even that would put us ahead of where we are today.


> unethical decisions aren't made at the bottom

But they sure are implemented at the bottom.

This article was published in CACM, whose target audience is CS academics and industry professionals. They are the people actually building and researching these technologies. I have literally heard a researcher working on deep fakes say, without irony, that it's not their role to think about ethics in their research, because the work will get done anyway, so the ethics questions should be left to others who are better trained to think about such things. That's sure a position to take, but I would say this article serves to speak in opposition to that kind of view. Moshe Vardi is highly respected and very influential, and he's basically calling out researchers and developers engaged in this work as unethical:

  Surveillance capitalism is perfectly legal, and enormously profitable, but it is unethical, many people believe, including me
Those are heavy words coming from someone of his import. Personally, due to the respect I have for him, if Moshe Vardi said that about my work, I'd maybe think twice about how I was spending my time. Because Lord knows the people doing the work of implementing and maintaining surveillance capitalism have very lucrative options. It's not like there isn't interesting work to do outside of this area. It's not like there isn't important work that needs to be done but can't get done because "surveillance capitalism" is so profitable that it acts as a black hole for young talent.

And so what this piece is really about is saying that if these people are going to be engaged in arguably unethical work, then maybe we shouldn't be elevating them with recognition within this particular (very influential) industry group. Instead we should call out what they're doing and introspect a little bit about our relationship with them (I say we and our as a member of the ACM, and I agree with what Moshe Vardi wrote here).


The term "surveillance capitalism" was made by one prof (seriously, the book starts with one-pager "The Definition", Sur-veil-lance Cap-i-tal-ism, n. (...)), however it has its own Wikipedia page. On that Wikipedia page, "ethi" is not found.

The way I see it, it won't become common to blanket strike down a Turing Award winner for working at Google.


How is it in any way significant that the sequence of letters “ethi” is not present in a Wikipedia article?


In the linked opinion piece, the author calls surveillance capitalism unethical; the author of Surveillance Capitalism wrote an entire book that is more nuanced. Likewise, the field of algorithmic fairness is more detailed than the political scandals that surfaced.


>> It would be extremely difficult to argue that surveillance capitalism supports the public good.

The people in charge have come to believe that GDP is somehow a measure of public good. Money can be used as a proxy for a lot of things in this world, but as the saying goes, "money cannot buy happiness". It can be used to avert a lot of pain, but that's not the same thing.


When somebody drops out of work to pursue a beneficial community passion, does GDP decrease? Economists would disagree...



