I can't say, yet! Hasn't struck. They'd like users, aren't forcing the issue. Very reasonable/measured so far. Should expectations appear [and I were to play along], I maintain they would be sorely disappointed. Simply don't believe we're limited by content generation/consumption.
All beside the point, anyway. I'll worry about meeting agreeable expectations in the next place... where I can renegotiate my side of the terms, too. The work doesn't really call for it, I'm already more productive than my enabled peers. Not pressed, options exist (both internal and external). Competitors more to my liking surely exist. I'm entirely fine failing to meet demands that I don't believe can/should be met. Call me fortunate [and perhaps naive] :)
My 'agents' were called 'pipelines' 20 years ago, they serve us well. The... 'real world' logistics need to be considerably shortened before an agent [or more pipelines] might have any meaningful impact. We have all the code/docs/whatever we might need, and a lot of built-in downtime, so I suspect it's a wash. Moving parts or people to datacenters, for instance.
All that's not to say an LLM can't be useful. They could spare us some shoveling, so to speak. Less work, not necessarily further or faster. Easier. There's not a lot of juice to squeeze and I'm not sure one should be willing [without proper consideration/compensation].
I've been wondering as well, and it seems acceptance is the only way. The evidence keeps piling up with every successively larger GitHub project we see.
I'm taking the bait, whatever. All those projects are just more fucking AI tools. It's all Claude seems to be good for - writing agents, skills, harnesses. Just a big fat ouroboros.
(Going down the /trending page - 13 of the 14 are some flavor of context manager or agent or something.)
Let me know when someone uses Gas Town or openclaw to write something that isn't "the next Gas Town or openclaw" and then we can talk
I think this framing is wrong. We have to learn to accept that it's not stealing. It's a new world where it's fair use and we don't know how to deal with it.
If we accept this then we can do something about it. Without it no one will heed us.
I understand your point, but at some point someone needs to think about morality.
If you or I copied and reimplemented nextjs in a better way, it doesn't feel as wrong. But when a large company does that and then brags about it, it's in poor taste.
Especially pointing out that one developer and $1000 of tokens replaced the efforts of hundreds of talented developers. There are people on the other side of the screen.
This matters less and less in the new world. The fact that a fully compatible 10x faster clone came up, and is continuously working and adapting/improving, tells you that this is hugely valuable. It has users and it's thriving.
Caring about taste in coding is a thing of the past now. It's sad :( but also something to accept.
Yeah, I tried to use this clone of pi for a while and it's very, very broken.
First of all, it wouldn't build; I had to mess around with git submodules to get it building.
Then I tried to actually use it. The scrolling behavior is broken: you cannot scroll properly when there are lots of tool outputs, and the window freezes. I also hit lots of weird UI bugs when trying to use slash commands. Sometimes they stop the window from scrolling; sometimes the slash commands don't show up at all.
The general text output is flaky: how it shows tool results, the formatting, the colors, and whether it auto-scrolls or gets stuck are all very weird and broken.
You can easily force it into a broken state by just running lots of tool calls, then the UI just freezes up.
This is incorrect. In the older world, you would not say intelligently autocompleted code was not copyrightable. Agentic code is the same on steroids. The strong argument: it's absolutely autocompleted code for the prompter, and hence fully copyrightable by them.
The solution is simple, but unpalatable to us. With AI, SWE-1 becomes a minimum-wage job, with SWE-2 (1.5x), SWE-3 (2x), and SWE-4 (3x). With such a rationalization we will retain more of the work here; otherwise it will move. Government policies cannot control this, as that would mean losing tech hegemony.
Is it worth taking a hit on higher compensation for longer term peace of mind?
Then why aren't companies offering minimum-wage SWE-1 jobs already? Could it be that the output of an AI tool still needs a modicum of skill and craft to evaluate?
Well, not exactly. An internship is a temporary position, which people mostly take to improve a CV at an early career stage, or as a fallback after being laid off. A "minimum wage job" is... a job.
Hell I had an internship in 1995 and they paid $10 an hour then and provided housing.
For context, my take-home was $650 every two weeks - roughly my total quarterly tuition at school, and the next year, the cost to rent a one-bedroom in the northern burbs of Atlanta.
If we're honest, you are not being stopped from mentoring developers. You're saying you'd want to do it on time paid for by others, while competing with teams not doing that.
So the actual pain point is compensation, and IMO we should directly call that out and address it.