Fair enough. To be clear, I'm not saying ML bias isn't a significant problem, and I'll certainly continue to address it as the project grows. Based on this, it sounds like we're mostly in agreement:
> But I also think bias, or echo-chamber, isn't unique to ML since we see it so much in the world and institutions that surround us, but we are in a unique position to address these problems directly on the systems we create.
I often run into people (usually not ML practitioners) who seem to think that ML inherently produces filter bubbles and bias (as opposed to specific ML implementations doing so), and who therefore conclude the entire approach should be abandoned. I think ML, with a proper focus on reducing bias, is one of our most promising options.