This community seems somewhat divided on AI ethics, and I wanted to clarify where people stand, so the question is simple:
If it turns out that AI is not capable enough to replace anyone - not artists or developers, not customer service, not doctors or lawyers - but can transform our work and processes so that we produce significantly better outputs and deliverables, is it still a threat?
> It samples from creators without their permission
What if it's legally established that studying someone's moves is not the same as flipping their material? Or if lines are drawn that clarify what copyright does and doesn't cover?
> It leaks info
What if guardrails can be implemented so that it doesn't?
> It could kill the world
What if there's no evidence, and no plausible technical route, for this to be even close to possible, let alone likely?
Does it change your opinion?
Can you see it being as generative of jobs and progress as it is of pixels and words?
What are your real concerns? I'm looking for the best possible takes on the potential good or bad.
If I write something, the last thing I want is for AI to rewrite it. An editor (any human) will tell you why you need to rephrase a sentence or eliminate a paragraph, and in the process you grow. I don’t want it to summarize for me, because I won’t know what it threw away. I don’t want it to generate pictures, because there’s no meaning in them. I don’t want it to generate code, because we could do with less code in software (why does a text editor need to include a whole browser?).
Anything we do as humans is a series of composable steps, each with its purpose, and mastery is the ability to execute them unconsciously. Get AI to do it and you’ve lost both meaning and mastery.