
To be frank, this is the opposite of how you should use an LLM. It's not going to be useful for diving deep on stuff that needs factual accuracy. It will, on the other hand, be useful for giving you shallow overviews you can drill deeper into, maybe even by helping you come up with good search terms, or sometimes by interrogating your thought process.



Deep diving or search would be tough to do in real time. Just rubber-ducking while listening to economics and business podcasts works great, accepting the limits of the tool and the domain. It's no scholar in your pocket yet.

Sometimes hosts and guests gloss over topics because they're trying to sell their book as the one way to find out more. An LLM can enhance the conversation as if the host had drilled a little further into some sore point, much like a popular post with lots of comments would. Not that those will be free of LLM influence for much longer.


I mean, I get that people are doing this, but it mostly seems like a way to poison your information stream. By multitasking on in-depth recitation you make yourself vulnerable to what I'd call "recall exploits": people remember some wrong piece of information and can't even remember where they heard it, possibly misattributing it to the podcast itself, kinda like old people do to themselves constantly by casually having cable news on in the background. Even if you assume no malice, say you're self-hosting your own fine-tuned Llama or whatever, the lack of accuracy is enough for this to be a problem.

Like, I'm aware that 80% of econ and business talk is gonna be BS anyway, but turbocharging that effect seems unwise.


On the contrary, not using LLMs extensively makes you more susceptible to slop in all its forms, because you haven't learned to avoid it by seeing how the sausage is made. Most bias you see in media is surface level anyway; it isn't something you can deep-dive. Cable news is such a limited hangout that having something to push against in a pinch, like OP's debate summary, could only add to the experience.

For some reason people trust the error rate of the internet over that of a single-entity LLM; it all depends on correct use and remaining sceptical. Either way, the same gambit that got everyone online will get everyone onto LLMs: I'll 1000x your knowledge for some low error rate (shrinking every month).

It's like muskets: they didn't shoot straight either when they first arrived, but you could compensate, and you'd have been unwise to stick to the broadsword out of tradition. Ask the Scottish Highlanders.


Eh, I was already good at checking things in parallel with a combination of Wikipedia and Google Scholar, so this use case of LLMs seems like just a more expensive, higher-error-rate version of a workflow a lot of integrated multitaskers have been good at for literally over a decade. Multi-monitor or multi-device workflows with known-good sources still beat LLMs for augmenting linear information streams with detail we're mostly going to fail to retain anyway, especially because in nearly every such use case the cost of negative errors (omissions) is essentially nonexistent while the cost of positive errors (fabrications) can be significant. These outsized claims of automated efficiency have yet to bear out in any tangible way for any real person I've encountered, because even for the Wikipedia version of the use case retention is extremely low, and an LLM is inherently an unaudited source, so we have weaker probabilistic epistemic guarantees about it.

As I've said before, I am using LLMs in my workflows, primarily in places where either factual accuracy doesn't matter (ideation/elaboration is a great use of any generative model) or it's feasible both to test the output in a tight feedback loop and to obtain the primary sources it's drawing from. They're also exceptionally good at translating fuzzy questions into actionable search terms, a use case Google has been trying (poorly) to approximate for years, and one that probably would have made a much bigger splash as the game-changing practical NLP result it is if they hadn't (it looks like an incremental improvement from an end-user perspective but decidedly isn't). Everyone who tells me how "efficient" they're getting by adding yet another captured passive stream to their already oversaturated information diet seems no more competent or knowledgeable than before the widespread usage of LLMs, but they are about 1000x more smugly assured that I'm a luddite for thinking their unfathomable credulity, in assuming their naive use case is the future, is overwrought (and they are, statistically, also likely the people who bought the industrialists' smear campaign that gave us the modern usage of that word).
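
For concreteness, the search-term use is roughly a ten-line script. This is just a minimal sketch assuming the openai Python client and an OPENAI_API_KEY in the environment; the model name, prompt wording, and helper name are placeholders rather than anything canonical:

    from openai import OpenAI

    client = OpenAI()  # picks up OPENAI_API_KEY from the environment

    def fuzzy_to_queries(question: str) -> list[str]:
        # Ask the model to rewrite a vague question as a few precise web
        # search queries, one per line, then split them back into a list.
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "Rewrite the user's vague question as 3 precise "
                            "web search queries, one per line."},
                {"role": "user", "content": question},
            ],
        )
        lines = resp.choices[0].message.content.splitlines()
        return [q.strip() for q in lines if q.strip()]

    print(fuzzy_to_queries(
        "that econ thing where prices stay high even after input costs drop"))

The point being that the model only does the query formulation; you still click through to the actual sources.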

From my perspective you're the guy swinging the musket like it's a broadsword, and I would very much like fewer people to be doing that with such enthusiasm, given the hazard-to-benefit ratio. But at least you're not the guy trying to chop vegetables with the musket, which unfortunately is the level at which a decent chunk of the business world currently functions in this analogy.


To ruin your remixed analogy further: it was actually bayonets affixed to muskets that defeated Bonnie Prince Charlie's army. Again, confident compensation won the day.

Lies by omission are worth railing against even if the first guess at what was omitted is wildly wrong; the important thing is that you've started the refinement process instead of letting the implied falsehood fester and mix with other choices you might make. The hazard of LLMs exists only for people who expect to be babied. The people who want no one but elite knowledge experts using LLMs are just chuffed at their own trivia-recall abilities, and perhaps a little afraid of being easily shown up after dedicating so much effort to the legacy way.


lol you really are one of these AI cultists huh? Do I get to hear "it's a godsend for the untalented" for the 28043rd time?

Your proposed model for adoption doesn't "show up the elites"; it destroys the ability of anyone who doesn't already have elite-level epistemic integrity-checking skills to ground themselves in reality, and may well, over time, subvert even decent epistemics via automation blindness. Using new tools is something people should absolutely do, but using them stupidly for the sake of using them creates a small-to-modest momentary edge for an infinitesimal fraction of this especially reckless class of early adopters who luck out, while most of them just serve as guinea pigs as the world figures out what the things are actually good for and irons out the rough edges. Regulating the way you ingest new information so that you can tell fact from fiction isn't "trivia recall"; it's a way to retain a semblance of soundness of mind. It's readily apparent from the increasingly unhinged landscape of conspiracy theories that it is in fact not better to wildly guess at gaps in your knowledge if those guesses have a chance of becoming sticky, and human brains are a lot better at filling gaps in than unfilling them. You're not making a good case for using these tools without grounding that usage in a realistic understanding of what they're good at, even setting aside the nominative determinism that's becoming increasingly apparent in your replies.



