State-of-the-art NLP models still have trouble flagging text posts containing explicit racism and hate speech, yet people claim they can pick up subtle language cues from CEOs?
Consider me a skeptic.
AI NLP models have way too many false positives and negatives for this to be workable. Maybe in 10 years, but definitely not now.
Just because NLP hasn't solved all problems doesn't mean it can't solve some problems. And some problems that seem complex may turn out to be shallower than others that initially appear straightforward.
Off the top of my head: racism is often so casual that stochastic AI models may have difficulty discerning a difference in syntactic structure or other linguistic features compared to similarly casual statements of a non-racist nature.
You don't need to be able to discriminate well, you just need a small non-consensus signal that may not even be perceptible to humans.
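To make the "small signal" point concrete, here's a toy simulation (my own illustration, not anything a fund actually runs): a signal that is right only 51% of the time, applied across many independent positions, still produces a reliably positive average edge.

```python
import random

def simulate(edge=0.51, n_bets=10_000, trials=200, seed=0):
    """Average per-bet payoff of a signal that wins +1 with probability
    `edge` and loses -1 otherwise, across `n_bets` independent positions,
    averaged over `trials` runs."""
    rng = random.Random(seed)
    results = []
    for _ in range(trials):
        pnl = sum(1 if rng.random() < edge else -1 for _ in range(n_bets))
        results.append(pnl / n_bets)
    return sum(results) / trials

# The average per-bet payoff converges to 2*edge - 1 = 0.02,
# even though any single bet is barely better than a coin flip.
print(simulate())
```

The point is just the law of large numbers: a barely-perceptible edge, repeated enough times, dominates the noise.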
But you're right to be sceptical. Hedge funds are largely a sales job. How can you convince gullible investors to overlook the terrible performance of your sector and invest in you anyway, then take a large slice of the profits when you get lucky? Buzzwords and neat-sounding strategies help.
> State of the art NLP AI models actually have trouble figuring out text posts with explicit racism and hate speech [...]
This is just wrong. State-of-the-art NLP models are perfectly able to identify racism and hate speech if they are trained to do so.
The issue is just that most of the time political correctness is not in the training objective. In fact, general language models are trained to reproduce what they read as closely as possible, hence the racism/hate speech.
Again, the problem is how they are trained, not what they could/could not achieve.
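To illustrate the "it's about the training objective" point, here is a minimal supervised text classifier, a bag-of-words Naive Bayes in plain Python with a tiny hand-made toy dataset (nothing here is a real model or real data): once you give the model labeled examples of the behavior you want, classification becomes an ordinary supervised task.

```python
from collections import Counter
import math

def train(examples):
    """examples: list of (text, label) pairs.
    Returns per-label word counts and per-label document counts."""
    counts = {}          # label -> Counter of words
    totals = Counter()   # label -> number of documents
    for text, label in examples:
        totals[label] += 1
        counts.setdefault(label, Counter()).update(text.lower().split())
    return counts, totals

def classify(text, counts, totals):
    """Naive Bayes with add-one smoothing over the combined vocabulary."""
    vocab = {w for c in counts.values() for w in c}
    n_docs = sum(totals.values())
    best, best_score = None, float("-inf")
    for label, c in counts.items():
        score = math.log(totals[label] / n_docs)       # log prior
        denom = sum(c.values()) + len(vocab)           # smoothed denominator
        for w in text.lower().split():
            score += math.log((c[w] + 1) / denom)      # smoothed likelihood
        if score > best_score:
            best, best_score = label, score
    return best

# Toy, hand-made examples just to show the mechanics (not a real dataset).
data = [
    ("they are all vermin and should leave", "toxic"),
    ("go back where you came from", "toxic"),
    ("what a lovely sunny day today", "ok"),
    ("the match was great fun to watch", "ok"),
]
counts, totals = train(data)
print(classify("leave you vermin", counts, totals))        # -> toxic
print(classify("lovely day for a match", counts, totals))  # -> ok
```

A real system would use a fine-tuned transformer rather than Naive Bayes, but the structural point is the same: the model learns whatever the labels reward, so "can't detect hate speech" is usually "wasn't trained to detect hate speech".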