Can we cut the AI Agent for X thing already
26 points by MenesJo 31 days ago | 22 comments
When did we move from describing how we solve problems to just slapping "AI Agent for X" on everything? I get the pitch that we want to automate industry functions, but none of the tools doing this are anywhere close to actually doing it.

E.g. every "AI Agent for Outbound" is doing the same things outbound tools did 5 years ago, with a bit of prompting on top.

Every "Your AI QA Engineer" is basically a shitty no-code recorder tool.

Can someone help me understand if those pitches actually work?




I think there is a certain class of leaders who have a ton of contempt for folks under them, like 3+ levels below them. To them, they're an incompetent problem to be excised, and if they weren't in the mix initiative X would already be done or initiative Y would be possible.

To those leaders, AI Agent for X is a completely unproven answer to their prayers.


If you've read Tribal Leadership, you'd recognize these as "stage 3" leaders.

Their core belief is that if everyone would just be more like them and do what they said, all of their frustration and problems would go away.


Never heard of this book before, and yet I was immediately tuned into what you were describing: the “I’m better than everyone else” types who begrudge others for not being imitations of themselves, as opposed to recognizing the benefits diversity brings in strategy, execution, and survival.

Sounds like I should add this to my backlog.


The main notes of Tribal Leadership:

tribes are the organizing unit of human society

culture is the driving force, and can be observed through the language people use and the relationship structures they have.

Stage 1: "life sucks", no relationships - gangs, prison, essentially you get ahead by taking advantage of people who don't realize life sucks.

Stage 2: "my life sucks", dyads - if only some oppressive power authority would change, my life would be better. Form dyads and complain about the boss, etc.

Stage 3: "I'm great", hub & spoke - rockstar with supporting cast; top achievers; political orgs; win at all costs.

Stage 4: "we're great", triads - relationships organized around values and competency and working to advance people into Stage 4 consciousness.

Stage 5: "life is great", org flow. - when a majority of people in a tribe are in Stage 4, they will pulse into Stage 5 organically, no competition at this level, solving new problems, in flow with life itself.


Yeah, that's about what I speculated from the book's title and what little description I got from the thread. It feels very natural and organic, but also personal if that makes sense? Like, I've mostly pulled away from any of the S2's in my life, my unorthodox home/familial unit is S4, and work feels like a mix of S4 teams railing and struggling against S3 leaders and their S2 followers.

It's a good "cliche system" to help simplify the complex power dynamic strata in a large organization. I dig it!


If you apply the principles, you’ll learn how to develop someone from 1 > 2, 2 > 3, 3 > 4. Primarily through making the right kind of introductions. And you’ll grow in the process as you own & integrate your own earlier states.

It’s the foundation of our leadership development work at EarthPilot.org


There is also a clear class antagonism to this in my experience. People at that level of management have more in common with the investment bankers and VC funders than the people who work under them. These people dream of a world without workers, a wonderland of pure profit.


Send in the robots and AGI!


I just ordered this book based off this description. I've had a unique work history with this type of leader.


Most have. The book will teach you how to get leverage no matter your official title or position within an organization.


We can’t, because generative models are the current hotness and must be shoved into everything for company valuations to go up.

That’s not sarcasm, either. This is quite literally how company leadership and governance/Boards function. There’s such immense pressure on “following the market” in an attempt to capture actual growth (as opposed to growth-by-inflation or growth-by-margin) that Boards will toss out executives who don’t adopt or sell these products because they perceive it as failing to adjust to shifting market conditions. It also doesn’t help that the typical company executive is also on a Board of other companies, creating a conflict of interest in all but the most legal of contexts.

Remember that we - the typical HNews/Slashdot/Reddit/Engineer/Developer crowd - are not who these companies are marketing to. They’re marketing to C-Suites, and those types don’t care how the sausage is made so much as they care about being able to tell the Board or shareholders how the company is leveraging the latest product fad.


Hucksters (of which "advertisers" are a subset) will always jump on the latest fad, and attempt to push its value well beyond wherever it might really be.

It's just the normal huckster cycle at work.


The business world is largely just NPCs running towards the ball, whatever that ball is at a given time, and right now it's a very shiny ball.


I think a lot of the time it's about jumping on the hype train. "We're going to use shiny new AI Agent for X" is easy for higher-ups to understand as progress, so this feels like a pretty normal cycle.

I do think some of the underlying technologies have merit in certain areas, but imo they're mostly in customer service/success, where human agents are overburdened and thus can't provide a good customer experience. Knowledge regurgitation is something current tech (LLMs) is good at, so to me this makes sense. Feels like in the longer term, more technical functions will start to work better once reasoning improves.


I'm actually not so sure that LLMs are good at knowledge regurgitation. They're good at generating text that semi-plausibly looks like knowledge regurgitation (which may or may not be incomplete or wrong).

See the recent Google AI Summary mishaps for some good examples of this.


I’m thinking of knowledge regurgitation in the context of a very structured environment — a la knowledge base for a company & internal policies as opposed to the entire internet.

A better way to convey this might be that LLMs are good at being conversational and given the appropriate context and guardrails, they can regurgitate knowledge from said context with reasonable accuracy.

Google’s mishaps (eating rocks, etc.) demonstrate there’s still quite a bit of work to do for this to work at scale, but the tech is still pretty good.
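The "context and guardrails" pattern described above can be sketched in a few lines: retrieve the most relevant knowledge-base entry, then build a prompt that tells the model to answer only from that entry. This is a hypothetical illustration; the knowledge-base entries and the word-overlap retrieval heuristic are stand-ins, not any real product's implementation.

```python
# Hypothetical sketch: ground an LLM answer in a small internal knowledge
# base rather than the open internet. KB contents are made up.
KNOWLEDGE_BASE = {
    "refund policy": "Refunds are issued within 14 days of purchase.",
    "shipping times": "Standard shipping takes 3-5 business days.",
}

def retrieve(question: str) -> str:
    """Pick the entry whose key shares the most words with the question."""
    words = set(question.lower().split())
    best = max(KNOWLEDGE_BASE, key=lambda k: len(words & set(k.split())))
    return KNOWLEDGE_BASE[best]

def build_prompt(question: str) -> str:
    """Build a prompt that constrains the model to the retrieved context."""
    context = retrieve(question)
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, say you don't know.\n"
        f"Context: {context}\n"
        f"Question: {question}"
    )

print(build_prompt("How long do shipping times take?"))
```

The guardrail here is just the instruction plus a narrow context window; real deployments would add a proper retriever and output checks, but the shape is the same.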


I guess it depends on how you define agent. One interesting example I’ve seen is descript. Someone I know used to pay a video editor for about 5 hours of work to edit down recordings of interviews. Now the tool (agent?) quickly takes out the “umms” and “uhhs”, and he can similarly edit up choice bits extremely easily himself. So that video editor is just completely obsolete now (for him). I think we’ll continue to see a lot more of these kinds of tools.


> Can someone help me understand if those pitches actually work?

Define 'work'. Like, it's all nonsense of course, but VCs currently like it, so expect to see a good bit of it until this bubble bursts and the VCs are on to the next thing.


something's gotta pay the bills to keep Hacker News' lights on


The cycle repeats. In every tech transition there is a bandwagon and herd mentality among VCs and young tech bros, involving FOMO and me-too cloning.

In the sharing economy it was AirBnB for X, Uber for Y, etc.

In the blockchain boom it was Blockchain for Logistics, etc.

In the crypto boom it was NFTs for this or that.

In the SaaS era it was cloud CRM, cloud HR, etc.

In the dot-com mania, there was pets.com and webvan.com.


Unfortunately, sloppyjoes are gonna slop.

When someone’s rationale and motivation lack reason, it’s difficult to ask them to self-correct.

It seems money is the slowest at coming around to identifying dark patterns in pitches. Until then, expect more.

Just ignore the bullshit, and work on what excites you.

It’s the only way to stay sane while watching this cycle replay itself over and over again.


Just wait til we get AI Agents for AI Agents for X. It's Agents all the way down.



