
> if you're saying i'm a racist

I wasn't, sorry if that's how you interpreted it.

> The status quo however should be changed by people, not people with machines

I'm not sure I understand what this means. People/companies own machines (and ML models) and use them. So shouldn't we make sure that the machines' decisions align with what people/companies _want_ them to do? (i.e. that the people's ethics align with the ML model's ethical consequences; I'm 100% sure that people who deploy "racist models" don't do it on purpose or out of malice)

> You can't call something ethical just because you think it should be; it must be argued out

On the one hand this sounds like a strawman. No one thinks that something is ethical just because someone randomly declared it so.

On the other hand... ethics are a human construct, and will continue to evolve as our culture evolves over decades and centuries. Shouldn't we construct ML models which are flexible in that they can align themselves with the ethics we collectively decide? We don't know how to do that yet!

> Using AI to shut out some more of that argument will only create a universal standard, not necessarily the correct one.

You seem to be under the impression that the field of AI ethics is dedicated to brainwashing people into some particular unpopular moral philosophy. This is simply untrue. Within the field of AI ethics there is a lot of diversity of thought and disagreement on how human morals should be "encoded" so that AI can "align" with these morals. And I'm using the plural of morals because obviously there will never be a humanity-wide consensus on ethics, and if AI is to be deployed in the world it needs to reflect this diversity.

Here's an example of AI people disagreeing if you don't believe me: https://jacobbuckman.com/2021-02-15-fair-ml-tools-require-pr...




Ethics and morals don't exist. Full stop.

Codifying ethics perpetuates falsehood. Every single generation in history believed that they "had it", only to be denigrated as hopelessly misguided by the next generation. We are making the same mistake, only fewer people are killed right now, so it looks like we are more successful. Remember the pacifism of the inter-war years? It bred fascism. The pendulum swings.

AI ethics cannot hope to remain in style for long, yet it will almost certainly persist for far too long. Accepted standards of 2 years ago are already out of date.

I'm at a loss for a solution.

I do think that the less AI is claimed to be ethical, the less it will be trusted, which is the best cure I can think of. Honesty is the basis of the whole of scientific inquiry, and is probably scarcer in Google's ethics research department than anywhere else in the building. (Programs don't run if the math is wrong; the same goes for economics.)


No, you're wrong. Ethics and morals do exist. Money exists. Ideas exist in our brains, functionally.

Are all these things _ideas_? Human creations? Sure. The universe is absolutely indifferent to us. But these _ideas_ have real-world impact, and I'm not indifferent to my own suffering.

Societies function at the scale they do right now because there is enough overlap between how I perceive the world and how another random human perceives the world that, even though we don't know each other, we can still cooperate [see e.g. 1 for great discussions on this], e.g. exchange money for goods.

[1] https://www.preposterousuniverse.com/podcast/2021/02/01/132-...

> AI ethics cannot hope to remain in style for long

Again, you seem to be conflating "AI ethics" with a particular ethical stance, let's call it woke humanism, and you seem to think that the people who work on AI ethics work to enforce this belief on others. This is wrong. We're perfectly aware that humans have a variety of ethical preferences, see my previous post. Lots of people who work in "AI ethics" are definitely not woke humanists.

> Accepted standards of 2 years ago are already out of date.

I'm not sure what you're trying to say here. Um, sure, we keep finding better algorithms... no one ever, ever, ever has claimed that their paper is the ultimate algorithm and that no one will ever find a better one. But 2 years ago, killing a random person in the street was wrong. It's still wrong today, it was wrong 2000 years ago, and it's going to stay this way for the foreseeable future.

> I'm at a loss for a solution

The research field of AI ethics exists because we don't know what the solution is!!! Come join us if you're so concerned.

> Honesty is the basis of the whole of scientific inquiry

If you value honesty, then you should value research that tries to make ML models "honest" by revealing how they make the predictions they do and where that fails. I don't understand your antagonism towards ML FATE (fairness, accountability, transparency, and ethics) research.
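
To make "honest" concrete, here's a minimal sketch of one such transparency tool, permutation importance (using scikit-learn on toy data, purely as an illustration, not any particular paper's method): scramble each input feature and see how much the model's accuracy drops, which reveals which features the model actually leans on.

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    # Toy data standing in for a real decision problem (loans, hiring, ...).
    X, y = make_classification(n_samples=500, n_features=5, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)

    # Shuffle each feature in turn and measure the accuracy drop:
    # the bigger the drop, the more the model relies on that feature.
    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i, imp in enumerate(result.importances_mean):
        print(f"feature {i}: importance {imp:.3f}")

If a "racist model" turns out to lean heavily on a proxy for race, this is exactly the kind of failure such research is designed to surface.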


My points still stand. History has shown us that defining ethics and writing them down merely spreads falsehood. I don't think we are actually better off morally now than 50 years ago. In some areas yes, in others we have regressed (look at addiction rates now). And to categorically state that AI ethics is as diverse as "traditional" ethics can't be true, by definition (mine, if you ask).

As for killing random people on the street never being socially acceptable... just look at Europe 80 years ago.

AI Ethical research can never be an accurate representation of the majority of humanity - or even the majority of its users! If it can, it is not sustainable for any long period of time.

What about far more stable principles, such as those against murder and racism, you ask?

They are prone to being overplayed or downplayed. Are state executions murder, or justice? What if the victim/hangman happens to be black? Why should it matter? Just ignore some issues? That's misleading by omission.

It would be better to just admit: "Yes, we at Giant Tech know our ethics are bs, but we had to put something down or our machines won't work. Maybe we are not ready for advanced AI. Maybe there's a limit on what programmers can do. Yeah, we know. Turns out computers DO have limits. We'll have to find other ways to make money."

"But why should we say that!" cry all the executives when this speech is proposed to them.

"because it's true, and if we don't act now the company is screwed. And our clients will also get screwed" is the answer of the timid executive who first suggested this.

"how will the truth help us?" respond all the execs in unision.

"if we can keep up the lie for long enough, we shall all long be millionaires and retired before it implodes! We shall long be out of danger! who cares if some people lose money?"

"Yes, but don't you feel bad for all the shareholders? And how can we possibly fool people for decades to come that our ai isn't bs?" Responds the poor executive weakly.

"By creating a fake team and telling everyone they are ethics researchers" they say. "Really they are just pawns to help us earn more money. Fish get eaten by bigger fish you know?"

"can you just help me change a few lines on my press statement?" Asks the first executive.


I think you're missing the point I'm trying to make, which is that developing "fair" algorithms is not about developing algorithms that are e.g. pro-white-black equality; it is about developing algorithms that have the option of equality built in. It is then up to the user of the algorithm (you, Google, whoever) to "input" what or who should be equal.

It just so happens that currently the "input" is racial and gender equality. That's a societal choice, and one that is likely to change if e.g. racial equality is achieved and some new inequality arises. Maybe eye-color-based discrimination, who knows.
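
To sketch what "the option of equality built in" could look like (a hypothetical demographic-parity check I'm writing purely for illustration; the function name is mine, not any particular library's): the fairness machinery is generic, and the protected attribute, whether race, gender, or eye color, is just an input.

    import numpy as np

    def demographic_parity_gap(predictions, groups):
        # Largest difference in positive-prediction rate between groups.
        # "groups" is whatever attribute society decides should be
        # treated equally: race, gender, eye color, anything.
        rates = [predictions[groups == g].mean() for g in np.unique(groups)]
        return max(rates) - min(rates)

    # Same machinery, different societal "input":
    preds = np.array([1, 0, 1, 1, 0, 0])
    eye_color = np.array(["brown", "blue", "brown", "blue", "brown", "blue"])
    print(demographic_parity_gap(preds, eye_color))  # 0.333...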

More generally than "equality", AI Ethics research gives us tools to analyze current methods and see where they fail to meet our ethical standards.

> History has shown us that defining ethics and writing them down merely spreads falsehood

Humans have been trying to improve their own condition for as long as there have been humans. Collectively defining acceptable behaviors is a never-ending task. Does that mean we should not undertake it? Absolutely not!

Writing down ethics isn't about spreading falsehoods, it's about cooperation. Cooperation involves compromise:

> AI Ethical research can never be an accurate representation of the majority of humanity - or even the majority of its users! If it can, it is not sustainable for any long period of time.

Laws can never be an accurate representation of the majority of humanity - or even the majority of its users! If it can, it is not sustainable for any long period of time.

Culture can never be an accurate representation of the majority of humanity - or even the majority of its users! If it can, it is not sustainable for any long period of time.

Morals can never be an accurate representation of the majority of humanity - or even the majority of its users! If it can, it is not sustainable for any long period of time.

Do you see the pattern? Things change; that's normal. We still have laws, and culture, and morals, but we adapt them to our needs. Are you suggesting we should simply reject anything that changes? You won't be left with much.

I'm still very much interested in improving my own condition. That includes pushing people to behave in ways which I think would do that. People have different interests, and their condition is often at odds with other people's. This is the foundational difficulty of living in a society of more than one individual. Yet we 8 billion humans still manage to be fairly successful at it. I wonder why?

Cultures and morals change. Does that make the morals of the past falsehoods? Of course not. They're just different perspectives on the human condition, probably best suited to the material conditions of the past.

Calling someone today is often seen as rude when a text would suffice. This is due to our material conditions: the ubiquity of cellphones.

> It would be better to just admit ...

You're suggesting we should admit defeat? Give up and let Google maximize profit? AI is a wonderful tool that could improve the material conditions of most of humanity if used correctly. It could also be devastating. I'd rather it not be devastating, so I'm going to continue supporting people who try to do research into aligning AI with whatever ethics we collectively agree on.

> I don't think we are actually better off morally now than 50 years ago

This is pretty sad.

Don't confuse your own cynicism vis-à-vis big tech with some nihilistic historical inevitability. The global improvement of the material conditions of people in the last 50 years has enabled us to start asking ourselves what morals we actually want on a global scale, rather than this exercise being left solely to a self-interested elite.

Regardless of how much better off morally _you_ think we are or aren't now, the space of collective possibilities is now immensely larger, whether you like it or not. That is wonderful.



