> Yep, the team is gone. The team that was researching and pushing for algorithmic transparency and algorithmic choice. The team that was studying algorithmic amplification. The team that was inventing and building ethical AI tooling and methodologies. All that is gone.
If you want to justify your position, I would suggest highlighting the positive changes you _did_ make, rather than the positive changes you _intended_ to make. If you didn't make any notable positive changes over the time you worked there, that might explain why you no longer work there.
'Ethical AI', in my opinion, is essentially the AI version of 'positive discrimination': they will keep biasing the inputs and outputs until the algorithm yields the desired result. In this case the desired result is not equality, but "corrected equality". Don't get me wrong, biased and misleading training sets are a problem, but the AI itself is not unethical.
Also, I would like to know whether algorithmic transparency included blue check-marks (which were obviously biased), or government take-down requests, or the promotion of completely random people for no apparent reason. There was never any intention of algorithmic transparency; the algorithm is Twitter's 'secret sauce', after all.
> If you want to justify your position, I would suggest highlighting the positive changes you _did_ make, rather than the positive changes you _intended_ to make.
Imagine where the world would be if Bell Labs hadn't invested all that money into positive changes that people _intended_ to make.
The comment you replied to did sound callous, given that the author had probably just been laid off.
Specifically on your response: planning specific changes is good, and so is attempting to solve hard problems, but during mass layoffs the teams with few accomplishments are usually hit first. Ambitious plans seldom help. Mass layoffs are rare, but this is something to keep in mind. My 2c.
Specifically on Bell Labs: until the 1990s, new employees were told "you have a job for life" (paid for by AT&T's monopoly, which generated huge cash flows). But when AT&T became less profitable and mass layoffs did start, as far as I know they followed the same pattern.
Yep. Something like "We engage selected ethnic communities for feedback on our algorithms and practices, helping to improve Twitter's social culture" translates to "I meet my ethnic friend for lunch and we hate on internal Twitter politics".
I'm not saying none of these people are doing good work, but I think they need far more criticism than they get.
An AI doesn't have agency in the way humans do. The AI has zero understanding of Holocaust denial, in the same way that any other tool has none, or a dog, or a child.