The article really only presents the hypothesis that artificial intelligence (AI) will significantly impact the final stages of various processes, often referred to as the "last mile." But she doesn't provide substantial evidence or detailed arguments to support the claim. There are no specific examples, data, or references to substantiate her position, which makes it more of an opinion piece than a thoroughly backed analysis.
Don't get me wrong, I would love this to be true.
I wasn’t expecting a research paper. More a vision paper.
But there are examples given of gardeners and bakers. I don't know how many of them actually rely on software to make their day-to-day decisions, so the examples may not be super accurate. But the point still stands: no matter how big the AI models get, you can't model for this variable called the last mile.
Context is a challenge for LLMs, but the challenge feels of a different quality to me than the challenge of incorporating local context into automated decision-making AI like algorithmic hiring, banking decisions, and real estate valuation like Zillow. Those examples are more like "pre-LLM" machine learning, and it's not clear to me that LLMs are inherently limited in the same way. If anything, I think there's potential for LLMs to more flexibly handle a much broader variety of local contextual information by ingesting natural language, whereas in non-LLM machine learning systems, how to featurize or represent this information is typically quite bespoke. Take the neighbors practicing death metal in their garage every Sunday and its impact on house valuation: it feels harder to get a non-LLM ML system to "understand" this, as a very sparse "feature," than an LLM.
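The contrast can be sketched in a few lines. In the tabular route, someone has to anticipate the noisy-neighbor signal and hand-encode it as a column, or the model never sees it; in the LLM route, the same fact just rides along as free text. Everything below (the field names, the prompt wording) is an invented toy illustration, not any real system's schema:

```python
# Toy contrast: bespoke featurization vs. natural-language context.
# All names here are invented for illustration.

# Non-LLM route: the signal only exists if someone anticipated it
# and hand-encoded it as a feature column.
def featurize(listing: dict) -> list[float]:
    return [
        listing["sqft"] / 1000.0,
        float(listing["bedrooms"]),
        # A very sparse, bespoke feature -- rarely present in training data.
        1.0 if listing.get("neighbor_noise_weekly") else 0.0,
    ]

# LLM route: the same fact is passed as free text; no schema change needed.
def build_valuation_prompt(listing: dict, local_notes: str) -> str:
    return (
        f"Estimate a fair price for a {listing['bedrooms']}-bed, "
        f"{listing['sqft']} sqft house. Local context: {local_notes}"
    )

listing = {"sqft": 1400, "bedrooms": 3, "neighbor_noise_weekly": True}
vector = featurize(listing)
prompt = build_valuation_prompt(
    listing, "the neighbors practice death metal in their garage every Sunday"
)
```

The point of the sketch: adding a new kind of local context to the first function means changing the schema and retraining, while the second function absorbs it with no structural change at all.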
Was thinking about this today in the context of hiring.
We have these amazing LLMs that are continually improving. Yet if you say to them, "here's my business, now take over the marketing department," you will end up with so much output that isn't localized that the whole output is worth very little. Yet when a highly experienced, localized marketing leader uses the LLM to speed up their work, the whole output is very valuable.
I don’t think this problem is solved by defining preferences better alone. It’s clear a human adaptation layer beyond RLHF is needed, at least in the short term.
Agreed. The problem is that many SaaS products want to integrate AI into their systems in ways that strike this balance, but in my experience, giving individuals an OpenAI account and letting them figure out how to automate the boring parts of their job is more effective. However, different people in the same job will likely use AI slightly differently, and it can be hard to find the right abstraction that suits a SaaS product.
And then, like it or not, companies are hopping on the agent train hoping to automate out a percentage of their headcount because that’s how they’re being pitched on it behind closed doors.
You seem to want a model that can be specialized to your own data.
We need a model that can learn new knowledge on the fly: not by putting it in its context/prompt, but by somehow storing new information in its weights. If each user can have a model-plus-memory combo, that will add so much more utility.
Or maybe we'll be able to train Claude models ourselves once the training costs go down enough.
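One minimal way to picture a per-user model-plus-memory combo is a frozen shared base model plus a small per-user weight delta that gets updated from that user's own corrections, which is roughly the idea behind adapter-style fine-tuning. Below is a toy sketch on a single linear layer; it is an invented illustration of the concept, not how any production model works:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen, shared base model: one linear layer for illustration.
base_W = rng.normal(size=(4, 4))

class UserMemory:
    """A tiny per-user weight delta, updated from the user's own feedback."""

    def __init__(self, dim: int, lr: float = 0.1):
        self.delta = np.zeros((dim, dim))  # starts as "no personal knowledge"
        self.lr = lr

    def predict(self, x: np.ndarray) -> np.ndarray:
        # Shared base knowledge plus this user's personal memory.
        return (base_W + self.delta) @ x

    def learn(self, x: np.ndarray, target: np.ndarray) -> None:
        # One gradient step on squared error, written into the user's weights.
        err = self.predict(x) - target
        self.delta -= self.lr * np.outer(err, x)

user = UserMemory(dim=4)
x = np.ones(4)
target = np.zeros(4)
before = float(np.sum((user.predict(x) - target) ** 2))
for _ in range(50):
    user.learn(x, target)
after = float(np.sum((user.predict(x) - target) ** 2))
# The correction now lives in the user's weights, not in a prompt
# that has to be re-sent on every request.
```

The design point is that `base_W` is never touched, so every user shares the same base model while carrying their own small `delta`, which is exactly the split the comment is asking for between the model and its per-user memory.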