Maybe local models can address this, but for me the issue is that relying on LLMs for coding introduces gatekeepers.
> Uh oh. We're getting blocked again and I've heard Anthropic has a reputation for shutting down even paid accounts with very few or no warnings.
I'm in the Slack community where the author shared their experiment with the autonomous startup, and what stuck out to me is that they stopped the experiment out of fear of being suspended.
Something that is fun should not go hand-in-hand with fear of being cut off!
> On average, Adam puts in 0-10 hours of deep work a week. The rest of his work hours are spent mindlessly coding, listening in on various meetings with his camera off, and on TikTok.
What is "mindlessly coding," and how does that work? The author seems dismissive of meetings and of some coding tasks.
Mindlessly coding is stuff like gluing together libraries, components, and services that do the actual work, building simple CRUD UIs, copying AWS configurations to Terraform, etc.
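To make that concrete, here's a hypothetical sketch of the kind of glue code I mean (the framework choice and names are just for illustration, not from the article): a thin CRUD endpoint that does nothing but wire a web framework to a database.

    # Hypothetical example of "glue" code: a thin CRUD endpoint that just
    # wires a web framework to a database, with no real logic of its own.
    import sqlite3
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    db = sqlite3.connect("items.db", check_same_thread=False)
    db.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, name TEXT)")
    db.commit()

    @app.route("/items", methods=["POST"])
    def create_item():
        # Take a name from the request body and insert it; nothing clever here.
        name = request.get_json()["name"]
        cur = db.execute("INSERT INTO items (name) VALUES (?)", (name,))
        db.commit()
        return jsonify({"id": cur.lastrowid, "name": name}), 201

    @app.route("/items/<int:item_id>", methods=["GET"])
    def read_item(item_id):
        # Look up one row and echo it back as JSON.
        row = db.execute("SELECT id, name FROM items WHERE id = ?", (item_id,)).fetchone()
        return (jsonify({"id": row[0], "name": row[1]}), 200) if row else ("", 404)

Necessary work, but it's mostly pattern-matching against things you've written a hundred times before.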
I wouldn't call that mindless, just not as cerebral as some tasks. But mindless or not, if it needs to be done then it's still productive work. Why the author feels that only "deep work" counts is beyond me.
Right. Some of my programming work is done sitting back in my chair, a movie going on the TV for background noise, probably daydreaming a little while I write code or documentation or do basic tasks. Then there's other "deep work" where I stand up at my desk, turn off anything in the background, and block everything else out of my mind until I've finished the task. It's possible that I'm 10x more valuable during the latter periods than the former (1000x is just silly).
If I could turn the first type of work over to an AI assistant so I could spend that time napping or working on a hobby or side interest, that would be cool. But that's not the goal of companies pushing AI. They figure if I don't have to do the "mindless" work anymore, I can do "deep work" for my full 40 hours. But maybe I'm not capable of working more hours at that level of focus. Maybe the hours spent doing "mindless" work are necessary for resting and recharging between the intense sessions. So if my company takes away the "mindless" work, they're probably not going to get the jump in productivity they're going for. They're more likely to burn me out and send me looking for a new job.
I can't speak for Anthropic, but I've seen Google hire people working on open source projects that were aligned with the skills they were looking for. Desktop search and collaborative editing come to mind, although I might be misremembering?
At some stage Google will need to be accountable for answers they are hosting on their own site. The argument of "we're only indexing info on other sites" changes when you are building a tool to generate content and hosting that content on your own domain.
I'm guilty of not clicking when I'm satisfied with the AI answer. I know it can be wrong. I've seen it be wrong multiple times. But it's right at the top and tells me what I suspected when I did the search. The way they position the AI overview is right in your face.
I would prefer the "AI overview" to be replaced with something that helps me better search rather than giving me the answer directly.
> But it's right at the top and tells me what I suspected when I did the search. The way they position the AI overview is right in your face.
Which also introduces the insidious possibility that AI summaries will be designed to confirm biases. People already use AI chat logs to prove stuff, which is insane, but it works on some folks.
> The argument of "we're only indexing info on other sites" changes when you are building a tool to generate content and hosting that content on your own domain.
And yet, "the algorithm" has always been their first defense whenever they got a complaint or lawsuit about search results; I suspect that when (not if) they get sued over this, they will do the same. Treating their algorithms and systems as a mysterious, somewhat magic black box.
I've personally felt this way about many proprietary tech ecosystems in the past. Still do. I don't want to invest my energy in learning something if the rug can be pulled out from under my feet. And it does happen.
But that is my personal value judgement. And it doesn't mean other people will think the same. Luckily tech is a big space.