> so we can likely just train only on properly sourced data in some way
But are we? I _can_ source chocolate and coffee from suppliers that don't use slave labor, but whether I actually _do_ matters a great deal. You can't just shove the moral implications under the rug because an ethical option _exists_. Obviously this is less extreme than my chocolate/coffee example, but weighing that is kinda the point of the questions being asked.
3 & 4 & 5 you just completely disregard as issues?!
"3 is not a problem given we do <highly controversial X>" - but what if we don't?
"4 is not a problem because we'll just get better" - but what about now when it isn't and what if we don't?
"5 is not a problem - just make X available and be rich"
3. So is almost all other software work. Software engineering is, fundamentally, about replacing people.
4. This is plainly a bullshit concern; there's really nothing to address here. People hallucinate all the time too, in the same sense LLMs do, and for the same reason.
5. The same could be said about trains or airplanes - or about most of the items one owns or sees in a 21st-century urban setting.
Sure, those are all "but X is worse!" arguments, but they make sense because there's no way to single out AI on those grounds and stay ethically consistent without giving up on almost everything else in your life.
I didn't downvote you; I think your response is a good road to walk down here.
Replacing people:
There's probably a lot of nuance here, but I would rephrase: tech is about empowering people to do something else. There's this TED talk guy with good graphs from years ago, and I was watching one of his subsequent, less famous talks where he extols the washing machine as being uniquely liberating for women practically everywhere. But of course this doesn't always happen. We've also come up with very good extractive tech, which has the opposite effect in developing countries with a single valuable resource--maybe someone would like to be a software engineer or whatever, but the only reasonable thing to do in terms of economics, opportunities, or networks is to join the local cobalt mine.
So, I wouldn't say software engineering is about replacing people. It's letting them do something else. Whether or not that something else is better is a societal problem. Also, scale matters. Did email and calendaring software replace lots of assistants? For sure. Were assistants the entire world economy? No. Put these two things together, and you start to see where AI companies are running aground. They're not empowering people, rather they're empowering companies to replace them. It'd be one thing if Claude were like, "hey software engineers, just flip me on, pay me $200/mo, and I'll earn you $300k/yr". That's not the pitch. Second, it's not just software engineers, or artists, or whatever. It's everyone. Again if the idea were "wow robots will do all the crap in our lives and we'll just work on our novels or weird hobby programming languages" OK, but again that's not at all the pitch.
Hallucinating:
Sure, people hallucinate (I'm gonna say "are fallible" or "make mistakes" when it comes to people), but we expect them to. That's why no one just says, "hey camgunz, build a website", and if they do, I don't just crank out the first draft without even loading it up and say, "OK, here is your website". Software engineering is programming over time, so we have processes like code review, CI/CD, testing, specifications, design, blah blah blah, because humans make mistakes. When I build websites I do as much of that myself as possible, as well as other quality assurance stuff, because I make mistakes.
But the pitch with AI is, "just dump stuff in the text box and poof", and somewhere in tiny font or buried in fine print is, "sometimes we goof, you should check things". But that's completely antithetical to the product because the more work you ask the AI to do, the more you have to check. If I'm just like "hey write me a fast inverse square root function" I have like 12 lines to check. If I'm like, "hey could you build me a distributed key value store" imagine the code I have to review and the concepts I have to understand (C++?) in order to really check the work.
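(For concreteness, "12 lines" isn't much of an exaggeration -- the classic Quake III-style bit trick is about this much C, give or take constants and style. This is just a sketch of that well-known version, not necessarily what an AI would hand you:)

```c
#include <stdint.h>
#include <string.h>

/* Classic fast inverse square root: approximate 1/sqrt(x) using a
   bit-level initial guess plus one Newton-Raphson refinement step. */
float fast_inv_sqrt(float x) {
    float half = 0.5f * x;
    uint32_t i;
    memcpy(&i, &x, sizeof i);          /* reinterpret the float's bits as an integer */
    i = 0x5f3759df - (i >> 1);         /* magic constant gives a rough first guess */
    float y;
    memcpy(&y, &i, sizeof y);
    return y * (1.5f - half * y * y);  /* one refinement step */
}
```

That's the whole review surface. A distributed key value store is orders of magnitude more code, plus the design decisions behind it, and I have to understand all of it to honestly sign off.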
But that can be fine! In the hypothetical "two Rust engineers + Claude build a distributed key value store", that's a big productivity win. It seems totally fine to use AI to do things you could do yourself, only way faster. But again, that's not the pitch; it's, "this will let you do things you could never do yourself, like build a distributed key value store without learning systems programming; fire all your engineers".
Concentration of power:
Yes, but we have lots of regulation--national and international--about this, because without it things were very bad (the industrial revolution was no picnic).
---
> Sure, those are all "but X is worse!" arguments, but they make sense because there's no way to single out AI on those grounds and stay ethically consistent without giving up on almost everything else in your life.
Again scale matters. There's a clear difference between word processing software replacing typists and AI software replacing everyone working in a cognitive profession.
When I think about it, I like to swap in physical professions, like the automation of factory work or trades work. Automating factory work was pretty bad: sure, prices went down, but we disempowered an entire class of people, and that's had pretty negative consequences.
Automating trades work has been really good though, and IMO the difference is that the benefits from automating factory work went to factory owners rather than workers, whereas the benefits from automating trades work went to tradespeople. I think 90% of my issues with AI would go away if we said something like: companies with > $2M ARR are barred from using it, but their employees can use it, cannot be discriminated against for doing so, can't be required to reveal that they're using it, etc.
---
Finally, a lot of armchair AI analysis (this isn't disparaging; I'm in this crew) is at the level of economic widgets and simple graphs or whatever. The pragmatic vision for SWEs is basically that we all use some AI assistant and we're 10x more productive, even though we have to check the AI's work basically constantly.
But if software engineering becomes "AI output verification", I won't choose to be a software engineer anymore, because that's not the fun part. I don't know how many people will want to be AI output verifiers. The level of social change this threatens is monumental; one starts imagining a world where people just kind of lounge in sunny parks pursuing their dreams, but in truth I think the future is closer to us just reading reams of AI-generated whatever, checking it for errors. Sure, maybe I'd like to be a software engineer or a playwright, but the only economically reasonable thing for me to do is just read AI-generated React code. Pretty grim.