
State of the art NLP AI models actually have trouble figuring out text posts with explicit racism and hate speech, but people are trying to say they can pick up subtle language cues from CEOs?

Consider me a skeptic.

AI NLP models have way too many false positives and negatives for this to be workable. Maybe in 10 years, but definitely not now.
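To make the false-positive problem concrete (illustrative numbers, not measurements from any real system): even a detector with 95% sensitivity and 95% specificity gets swamped when the thing it detects is rare, which hateful posts are relative to all posts.

    # Hypothetical base-rate arithmetic: 1% of posts are hateful,
    # the detector catches 95% of them and wrongly flags 5% of the rest.
    base_rate, sensitivity, specificity = 0.01, 0.95, 0.95

    true_pos = base_rate * sensitivity               # 0.0095
    false_pos = (1 - base_rate) * (1 - specificity)  # 0.0495
    precision = true_pos / (true_pos + false_pos)
    print(f"precision: {precision:.1%}")  # ~16%: most flags are false alarms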




> State of the art NLP AI models actually have trouble figuring out text posts with explicit racism and hate speech [...] Consider me a skeptic.

Perhaps the reason detecting racism or hate speech is so hard for an AI is that what is considered racism or hate speech is a moving target.


Just because NLP hasn't solved every problem doesn't mean it can't solve some. And some seemingly complex problems may turn out to be shallower than ones that initially look straightforward.

Off the top of my head, racism is often so casual that stochastic AI models may have difficulty discerning any difference in syntactic structure or other linguistic features compared to similarly casual statements of a non-racist nature.


You don't need to be able to discriminate well; you just need a small non-consensus signal, one that may not even be perceptible to humans.
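As a toy illustration (all numbers hypothetical): a signal that is right only 51% of the time is useless for judging any single statement, but aggregated over many independent bets that tiny edge becomes a reliable profit.

    import random

    random.seed(0)
    EDGE = 0.51        # hypothetical per-bet hit rate, barely above chance
    N_BETS = 10_000    # number of independent unit bets

    # A 1% edge is invisible on any single bet but dominates in aggregate:
    # expected P&L is N_BETS * (2*EDGE - 1) = +200 units.
    pnl = sum(1 if random.random() < EDGE else -1 for _ in range(N_BETS))
    print(f"net P&L over {N_BETS} unit bets: {pnl:+d}")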

But you're right to be sceptical. Hedge funds are largely a sales job: how do you convince gullible investors to overlook the sector's terrible performance and invest in you anyway, then take a large slice of the profits when you get lucky? Buzzwords and neat-sounding strategies help.


My favorite example...

"...we went from a negative growth in Q4 in storage revenue on a year-over-year basis. We now have flat..."

"DELL – Q1 2022 Dell Technologies Inc Earnings Call"

https://investors.delltechnologies.com/static-files/d70442f3...


I'd suggest it really depends on the exact problem space you are working on.

Some NLP-related problems can be solved at a level of quality similar to humans (humans actually make a lot of mistakes as well).


> State of the art NLP AI models actually have trouble figuring out text posts with explicit racism and hate speech [...]

This is just wrong. State of the art NLP models are perfectly able to identify racism and hate speech if they are trained to do so.

The issue is just that most of the time political correctness is not part of the training objective. In fact, general language models are trained to reproduce what they read as closely as possible, hence the racism/hate speech.

Again, the problem is how they are trained, not what they could/could not achieve.
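For what "trained to do so" means in practice, here is a minimal sketch using the Hugging Face transformers library; the model name is one publicly shared hate-speech classifier picked for illustration, not a claim about its accuracy.

    from transformers import pipeline

    # A classifier fine-tuned on labeled hate-speech data (TweetEval).
    # The supervision is what makes detection work, not the architecture.
    clf = pipeline("text-classification",
                   model="cardiffnlp/twitter-roberta-base-hate")

    for text in ["Have a great day!", "I can't stand people like them."]:
        print(text, "->", clf(text))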

> AI NLP models have way too many false positives and negatives for this to be workable.

Citation needed.


On the other hand, you have access to what is probably hours of talks by C-suite executives expressing different emotions more or less subtly.

Why wouldn't it be possible to train on that?
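A minimal sketch of the first step, assuming transcripts are already in hand (the sentences below are invented, and whether the resulting tone score predicts returns is exactly what's disputed above):

    from transformers import pipeline

    sentiment = pipeline("sentiment-analysis")  # default English model

    # Hypothetical snippets from an earnings-call transcript.
    transcript = [
        "We now have flat storage revenue on a year-over-year basis.",
        "Demand remains strong across all segments.",
        "We are cautiously monitoring supply-chain headwinds.",
    ]

    # Map POSITIVE/NEGATIVE labels to +1/-1, weighted by model confidence,
    # and average into a single per-call tone score.
    scores = [(1 if r["label"] == "POSITIVE" else -1) * r["score"]
              for r in sentiment(transcript)]
    print(f"aggregate call tone: {sum(scores) / len(scores):+.3f}")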



