Talent being siphoned off into industry is an understandable and interesting issue. I didn't expect the shift to ethics at the end. I can't help but snort when someone from an advertising company talks about ethics.
You're part of a machine making billions off of exploiting human psychology to push products and get people to click ads. She's maybe not wrong about the issues with AI, but there are way bigger problems and lower-hanging fruit than vague biases in machine-learning models. It's like worrying about whether the coffee beans in Google's coffee machines are ethically sourced.
At the end of the day, as exemplified by ad companies, people are gonna go make money, ethically or not. If you could legally turn babies into cheeseburgers, someone would do it. Engineers are not the right people to deal with ethics. That's what laws and regulations are for. You don't just hope and pray that the engineers are nice people. That's a naive way to operate a society.
> Engineers are not the right people to deal with ethics.
I think that engineers (and everyone in every field) are the right people to deal with ethics. Pretty much everything anyone does has an ethical component to it, and it's a mistake to ignore that. It's an even bigger mistake to farm thinking about ethics out to specialists.
Laws and regulations are critically important as well, but they don't define ethics; they are informed by ethics. That's why it's a mistake to think that something legal is therefore ethical, or vice versa.
Yes, the irony is that corporations (and all other collectives) already represent AGI, in that they can pass all the same tests and assume all the same roles as any human when given the same kinds of accommodations we give to these technological systems.
And we've already let those AGI align themselves predominantly on "profit"
In their worry about a new AGI that they may never actually discover, the apocalypticists are being stymied by the one that's already here and already working to take over the world.
I wonder how long before we're hearing about egregores in common parlance. Corporations are already acknowledged as having freedom of speech in America; that's halfway there.
That's the definition being used in AI research. Does this system succeed on this test of {reasoning, theory of mind, abstract thinking, ... etc}? Can this system outperform most human workers?
And yes, corporations and other collectives can fully satisfy both criteria for most economically valuable tasks. If you make the same degree of input/output adaptations for them that you do for LLMs, they'll write you articulate letters all day, perform inhuman computations and reasoning tasks, develop and execute grand strategy, control all your robots, ace all your tests, and then write all your code for a tenth of what it would cost with your Bay Area tech bro.
From OpenAI's charter:
> OpenAI’s mission is to ensure that artificial general intelligence (AGI)—by which we mean highly autonomous systems that outperform humans at most economically valuable work—benefits all of humanity.
I think the general public understands AGI as a conscious mind that is also intelligent in the AI-researcher sense, like what you might see in a movie. A lot of hype comes from this terminology gap.
> engineers have failed to communicate how picking the ethical options can create value for key stakeholders.
Does the primary responsibility for that really fall on engineers in the first place?
It just feels a little victim blame-y to me. Shouldn't most of the culpability fall on those stakeholders who don't want to be convinced because it means lower feature-velocity or less investor-profit?
Primary? Debatable. But you do have responsibility for what you build, who you build it for, and what and who you enable with your expertise. Engineers have plenty of that, so they have plenty of responsibility too.
Oh, everyone has some contributory involvement, but I find it hard to imagine the same kind of standard being widely applied to other employee-experts in other industries.
I mean, do we blame food chemists for "failing to convince stakeholders" that the candy company should use ethically sourced fair-trade chocolate and less sugar? Or mechanical engineers for "failing to convince stakeholders" that the oil company should avoid drilling in areas that are environmentally sensitive or controlled by warlords of questionable legitimacy?
No, we recognize that a laborer having specialized skills doesn't automatically mean the corporate machine is interested in their other opinions. (Unless they're visibly documenting problems that could someday harm the company in a potential lawsuit, in which case they tend to get laid off.)
Picking the ethical option will not create financial value for key stakeholders. The metrics are misaligned. Smart and wise people spent 2,000 years seeking a workaround for this problem. There is none. There are only convoluted excuses.
If they cared, they could sit down with the engineers and make sure they understand, or have one of their underlings do that and make a summary for them.
They generally don't care about the opinions of engineers that much, except to estimate the cost and practicality of different ideas.
a) I don't think CEOs of multi-billion-dollar companies care about the opinions of engineers in general, let alone their views on ethics versus revenue.
b) CEOs typically will do whatever it takes to maximise revenue, past the point of what is ethical/moral, up until they hit the limit of what's legal.
And then most but not all will stop. I'm hopeful but doubtful that AI will be any different.
The thing I always find strange in these discussions is that they seem to frame people who chase "money, power, fame, success, ego etc." as not acting ethically.
It's interesting, I think, because all of the above have ethics in my experience. What they don't do is treat their ethics with the same absolutism as the vocal activists/advocates who frame everything unaligned with their view critically. And if you show even a little compromise, that too gets treated as a lack of ethics, in that people will at the very least view it as the money winning out over personal beliefs.
The idea that someone can be legitimately ethical without being an extremist about it goes largely unacknowledged by the vocal ethicists. But in a way that makes sense too, because from their view, the silence is probably just more room for criticism in the conversation.
Beautifully written and felt with both heart and mind, but it already reads like a belated plea to bring humanity to computer science, AI research, and big business, all while liberal arts universities are losing their liberal arts. One cannot really disagree (I cannot, anyway), but what pragmatic actions can one take, especially after moving from Stanford to Google? Yes, Li has been in, and fed, the belly of the beast. Now what?
I understand that she only worked at Google during a sabbatical from Stanford (Jan '17 to Sept '18) and that she's back at Stanford now. [1]
It's a little bit confusing that she writes in the present tense about being at Google, but I think that's because this article is composed of excerpts from her recent book, so they're excerpting a bit from that time period.
Are liberal arts universities really losing their liberal arts, though? The emphasis and core material of a liberal arts education have shifted a bit. But considering how ubiquitous Critical Theory and its components have become in everything from academia to public discourse to institutional policy-setting, I don't think it's accurate at all to refer to the liberal arts as in decline. Maybe the classical liberal arts are, but the modernized liberal arts are at the heart of most politics right now.
I'm listening to her book The Worlds I See: Curiosity, Exploration, and Discovery at the Dawn of AI. Right now I'm just in the middle. She started from physics but eventually found the internal world of a person interesting, which led her down the road to artificial intelligence. I have pretty much the same experience as her: starting young, falling in love with physics as I got older. I also noticed a huge motivation to learn about myself: the thing in my head, how it works, how it behaves, how I react to certain external-world stimuli. I'm still in the process of getting through her book, trying to understand this new technology, immerse myself in it, and do some interesting stuff.
> I’d seen the consequences of that over and over: brilliant technologists who could build just about anything but who stared blankly when the question of the ethics of their work was broached.
This is something I've seen as well, in contexts much more remote from the nerve centers like Google or even Stanford. It's deeply disturbing.
“Whatever academics like me thought artificial intelligence was, or what it might become, one thing is now undeniable: It is no longer ours to control.”
I have no doubt Sam Altman thought he was in control until he was told to go… by humans…
NO. We repeatedly deny what our neurons demand, e.g. lust, violence, etc. Granted, we end up falling for these impulses many a time, but we also deny them many times. The former occurs more in those who identify themselves with their impulses (desires); as one evolves one's mind and intelligence, one becomes able to resist and gain much better control over these impulses.
AI as it stands only represents the human brain; the mind is much bigger than the brain, with a multitude of functions, and don't get me started on consciousness, which is the chief ruler of all and beyond mind, space, and time: https://www.youtube.com/watch?v=S57DSgRWBRM
The best, and most useful, programs and algorithms are the ones you download from a site that hasn't been updated in years, that asks for no money, consists of a single .exe, and does a single job.
I've been thinking a lot about tools recently. About how much things should be designed to be predictable, narrow in scope, and simple. Predictable in particular meaning not only deterministic, but also having no edge cases (so they can be treated as a simple mutation rather than requiring thought about the why of how they work).
dang, I see you've changed the submission title back to the article title.
The thinking behind going with my submission title:
> Fei-Fei Li wants AI reimagined from the ground up as a human-centered practice
was that the article title was too vague. The 'North Star' she is referencing is explained in the last paragraph of the article, which is what I used to compose my submission title:
> This, collectively, is the next North Star: reimagining AI from the ground up as a human-centered practice. I don’t see it as a change in the journey’s direction so much as a broadening of its scope. AI must become as committed to humanity as it’s always been to science. It should remain collaborative and deferential in the best academic tradition, but unafraid to confront the real world. Starlight, after all, is manifold. Its white glow, once unraveled, reveals every color that can be seen.