> Members of The Church of Graphs live by one primary commandment: thou shalt not believe your lying eyes.
Ah, how familiar this is from some colleagues in tech.
Demands for evidence are asymmetric: make a bold claim that's aligned with the group and it slips by; hint at a misaligned claim and you get chided for not being a researcher in the field and for spreading misinformation.
> The AI industry is 99% hype; a billion dollar industrial complex to put a price tag on creation. At this point if you believe AI is ‘just a tool’ you’re wilfully ignoring the harm.
> (Regardless, why do I keep being told it’s an ‘extreme’ stance if I decide not to buy something?)
> The 1% utility AI has is overshadowed by the overwhelming mediocrity it regurgitates.
This sort of reasoning is why you might have been called extreme.
It's less extreme to say "many people see and/or get lots of benefit, but it's wrong to use the tool due to the harms it has".
There's nothing wrong with being extreme, but since you asked.
Yes, declaring AI to be 99% hype just turns away people like me from what the author has to say.
I was an AI sceptic for a long time, until toward the end of last year when I seriously evaluated the tools and came to realise they could add tremendous value.
When someone comes along and declares that it's all hype, it goes against my experience that it's getting things done.
I can also see the harm it does, and I hope the tooling improves to reduce that harm. For example, there's a significant lack of caching in the tooling. It's constantly re-reading the same files every day and, more harmfully, constantly fetching the same help pages and blog posts from the web.
If it had a generous built-in HTTP cache, and an instruction to maximise use of the cache, it could avoid a lot of re-fetching of content, which would help reduce the harms.
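To sketch what I mean, here's a toy on-disk cache for fetched pages (Python; the cache location and TTL are made-up values, not any real tool's behaviour):

    import hashlib
    import time
    from pathlib import Path

    import requests

    CACHE_DIR = Path(".fetch_cache")   # hypothetical cache location
    TTL_SECONDS = 24 * 60 * 60         # serve a cached page for up to a day

    def cached_fetch(url: str) -> str:
        """Return the page body, reusing the on-disk copy while it's fresh."""
        CACHE_DIR.mkdir(exist_ok=True)
        key = CACHE_DIR / hashlib.sha256(url.encode()).hexdigest()
        if key.exists() and time.time() - key.stat().st_mtime < TTL_SECONDS:
            return key.read_text()      # cache hit: no network request
        body = requests.get(url, timeout=30).text
        key.write_text(body)            # cache miss: fetch once, store
        return body

Even something this crude, applied to a day's worth of repeated help-page fetches, turns most of them into local reads.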
Declaring my experience to be invalid and based on nothing but hype doesn't engage people like me at all.
And it's people like me, middle-of-the-road developers working on enterprise software, who either need convincing not to use the tools, or whose habits need to change to minimise the harm.
Because otherwise we're quietly getting on with using it, potentially destroying forests and lakes as we do.
It’s worse than that: in the linked “I’ve done my research” post they make the tired claim that AI hallucinates API calls, which, while true, has not been a practical problem since tool calling was added.
I think the position that AI is morally troubling enough that the downsides outweigh the positives is perfectly defensible. But the entire argument becomes a joke when you can’t accurately catalog the positives.
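(For anyone unfamiliar: with tool calling the client declares the callable functions and their schemas up front, and the model fills in arguments for those declared signatures rather than inventing endpoints. A minimal sketch, assuming an OpenAI-style chat API; the function here is made up:)

    # The model is handed this schema and can only emit calls that match it,
    # e.g. {"name": "get_user", "arguments": {"user_id": 42}}.
    tools = [{
        "type": "function",
        "function": {
            "name": "get_user",  # hypothetical function, for illustration
            "description": "Fetch a user record by id.",
            "parameters": {
                "type": "object",
                "properties": {"user_id": {"type": "integer"}},
                "required": ["user_id"],
            },
        },
    }]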
At this point, I’m pretty sure saying “I’ve done my research” is more of an indicator that someone hasn’t done their research but would like to be taken seriously anyway by pretending they did. The kind of person who’s both smart enough to realize that an issue might be more nuanced than they present it, as well as intellectually dishonest enough to… not care.
> Declaring my experience to be invalid and based on nothing but hype doesn't engage people like me at all.
> And it's people like me, middle-of-the-road developers working on enterprise software, who either need convincing not to use the tools, or whose habits need to change to minimise the harm.
> Because otherwise we're quietly getting on with using it, potentially destroying forests and lakes as we do.
His point in the article is that if you are aware of the potential harms and are still willing to "quietly [get] on with using it", you're complicit in the harms caused.
In your response, you seem to be both (1) acknowledging his points about potential harms and your willingness to proceed even with awareness of those harms, while (2) simultaneously absolving yourself of responsibility for the harms caused by your participation, by claiming the poster didn't do enough to "engage people like [you] at all".
> If it had a generous built-in HTTP cache, and an instruction to maximise use of the cache, it could avoid a lot of re-fetching of content, which would help reduce the harms.
While this is a great idea, the harms are somewhat overblown. The big scare number for water consumption includes water used in power generation, which itself includes evaporation from hydroelectric reservoirs.
> Lines Will Move Further Away If They Aren’t Defined
Is that necessarily a bad thing?
Sometimes I think some line is important, then I move closer to it, and realise the line is less important to me, and so I'll be less cautious of it.
Some might say "slippery slope!" or "boiling a frog!", but I think of it as me updating my values as I learn more.
Some people are prone to black-and-white thinking, and so I can see why they might be drawn to hard lines.