Hacker News: tedbradley's comments

Glad you asked. AI empowers people who couldn't do a job before to do a job. With more supply of qualified workers, these workers compete with each other by lowering the salary they'll take.

So:

* You get paid less.
* The company might pay a similar amount due to LLM costs, though it could work out to more or less.

A couple of years ago, I saw a story about a guy who wrote two articles a day for a website. The boss asked him if he wanted to transition to being an AI-assisted writer for less pay. He said, "No." After a couple of weeks, he got canned. He checked the website out, and it had a bunch of AI writing on it.

LLMs are there to reduce your salary and increase the business owner's profits. Wealth inequality will only keep growing, along with waves of firings across many different fields.


That is one possibility (that is playing out). Another one worth contrasting is the idea of AI as leverage for the worker. If you can take a regular developer and augment their output by 25%, then they have become more valuable to you and you should pay them more. Why should you pay them more? Because the market rate will price in that they provide more value now and you'll lose those workers to competitors if you don't.

That's a pretty old economic idea, and it will be interesting to see if it holds up in this instance. I have no idea how this all plays out. I do think it won't be one size fits all though.


Given that the user between your comment and mine is a 1 day old account that did not address my comment at all and instead hallucinated a response, I assume they are a bot.

> "LLMorphism may encourage objectification when people are seen as replaceable mechanisms or output-generating systems. However, LLMorphism does not necessarily involve using another person instrumentally. Its primary content is representational: it concerns how humans are conceptualized, not necessarily how they are exploited."

This is quite a scary truth. A year or two ago, I saw a person whose job was writing small articles for a website. The boss contacted him, asking if he wanted to become an AI-assisted writer instead for less money. "No," he said, wanting full payment for his writing prowess. A week or two later, they canned him, and the website's articles nosedived in quality.

LLMs expand the supply of "competent" labor. After mass firings, the remaining workers, desperate for income, accept lower wages for AI-assisted roles. Wealth consolidates upward while wages race downward.

So I think LLMorphism might tie closely to exploitation: mass firings and lower salaries spread while the 0.01% of machine-learning companies consolidate wealth, in some cases by servicing entire roles autonomously, and in others by depressing salaries through a larger pool of "qualified" workers who can technically finish the job despite never having qualified for it before.

> "LLMorphism is also distinct from predictive processing and related Bayesian theories of cognition. Predictive processing holds that the brain continuously generates predictions about sensory input and updates internal models in light of prediction error (Clark, 2013; Friston, 2010; Hohwy, 2013). But predictive processing does not imply that humans are LLM-like, nor that human understanding is merely text generation. Indeed, many predictive-processing accounts are deeply embodied and action-oriented (Allen & Friston, 2018; Clark, 2015; Pezzulo et al., 2024)."

I agree wholeheartedly here, because neural networks (NNs) are usually stateless functions (recurrent networks aside). On the one hand, with an infinitely fast computer, you retrieve the answer instantly. Brains, on the other hand, have neurons that communicate with signal delay. I bet that if, in some weird world, we could simulate a brain with zero delay, the mind would cease to function correctly. Plus, biological neurons accumulate charge steadily before firing to nearby neurons. With NNs, you simply add up all the numbers, the "charge," and the ReLU function (or sigmoid, for old-school machine-learning researchers) instantly "simulates" a neuron firing off to the neurons connected to it.
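To make the contrast concrete, here is a minimal, purely illustrative sketch of that stateless view (the function names are my own, not from the paper): the whole "neuron" is a single function call, with no charge accumulating over time and no delay between input and output.

```python
def relu(x: float) -> float:
    # The modern default activation: pass positive values, clamp negatives to 0.
    return max(0.0, x)

def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    # Sum the weighted inputs (the "charge") and fire through ReLU all at once.
    # Unlike a biological neuron, nothing integrates over time here: same
    # inputs, same output, instantly, every time.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return relu(total)

print(neuron([2.0, -1.0], [1.0, 0.5], 0.5))  # 2.0*1.0 + (-1.0)*0.5 + 0.5 = 2.0
```

A recurrent network would thread hidden state between calls, which is the closest standard NNs get to a neuron's accumulated charge, but the feed-forward case above really is just arithmetic.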

> "and and"

Just a heads-up: you have a typo here.

> "LLMorphism may therefore make fluency appear sufficient for understanding and, in doing so, devalue expertise and weaken educational norms."

I have heard the horror stories that youngsters these days are glued to screens and less able to focus, but I'm not scared of that claim yet. Every generation has had those who kick the can down the road, skirting responsibilities; all that changes with the generation is the activity. Instead of kicking a can down the road, they slide a finger across a phone screen. The real test is tracking how many high-school students are in AP courses, learning Newtonian mechanics, electromagnetism, and of course calculus, among a few others. Is that number dropping relative to the 90s and the aughts? Is it roughly the same as a percent of students? Or is it even going up, with LLMs perhaps helping some types of learners explore topics that qualify them for AP coursework? Now, if that percent is nosediving, *then* I will be terrified for what the future holds for them and for me.

> "clinicians also rely on how patients appear. Research on clinical communication shows that nonverbal behaviour is central to physician–patient interaction, including the expression of emotion, empathy, distress, and relational understanding"

LLMs are becoming multimodal, with pictures "understood." I can't think of any reason LLMs won't catch these non-verbal signals in the future.

> "The risk may be particularly acute in mental health, where suffering can be difficult to articulate and where coherent self-description does not always track clinical severity; behavioral and nonverbal signs such as psychomotor retardation, agitation, facial expression, vocal dynamics, and posture can provide clinically relevant information beyond verbal report (Dibeklioğlu et al., 2015)"

This is a great point, because a lot of people with schizophrenia and bipolar disorder with psychotic features suffer from anosognosia, the state of not knowing they have a medical condition.

> "In this sense, LLMorphism may contribute to a broader epistemic shift: from evaluating whether claims are grounded, justified, and accountable, to evaluating whether they are coherent, fluent, and plausible."

Grifters have always weaponized confident fluency over evidence. Anti-science plagues America right now. A gullible few absorb the message that ivory-tower elites intentionally block heterodox, paradigm-shifting research, sowing seeds of doubt about academia. For example, I saw a doctor's YT channel that claimed high cholesterol isn't necessarily bad and that statins should be avoided, all while recommending saturated fats over seed oils. Of course, he sells a book with his "suppressed" knowledge alongside an online store selling US$90/month supplements that his book recommends. Such grifters claim academics keep them out of the journals out of self-preservation, since the "paradigm shift" would make their grants go bye-bye.

In reality, these charlatans combine cherry-picking low-quality studies, telling a good underdog-fights-the-establishment story, and ignoring the body of evidence supporting the current expert consensus. The grift is illogical on its face: as if researchers wouldn't love to spark a paradigm shift, becoming semi-famous and making more money; as if research isn't decentralized across many countries, funded by charities, different governments, and corporations in competition with each other. Collusion at that scale without whistleblowers is simply implausible. There's also a difference between the corporate arm of medicine, which has been sued for billions before, and researchers who just follow the evidence to advance their careers and help everyone on the planet. Trust expert consensus when it's this independent, decentralized, and financed from all over the place, with zero reason for an ulterior motive. The charlatans also pull the "science has been wrong in the past" move, like Mac from It's Always Sunny in Philadelphia. Science is in a state of constant flux where new evidence comes in, and the best guess, the one explaining as much evidence as possible right now, might change.

> "Early childhood education is organized around relational pedagogy, attachment, affect regulation, and development (Cliffe & Solvanson, 2023)."

One aspect here: mass-produced cartoons for kids teach plenty and do a decent job of it. I'm not convinced that, two decades from now, we won't have human-looking cyborgs teaching like this.

> "The broader point, however, is that public debate on AI has focused mainly on anthropomorphism: whether we are giving too much mind to machines."

This part reminds me of some recent research out of Anthropic. They uncovered that a few hundred vectors in their activation space lined up with concrete emotional states. They dubbed these functional emotions while warning that they have nothing to do with subjective experience or sentience. The paper had fantastic details, though. They tested things by adding a large magnitude along a particular functional emotion's vector, running some tests, and seeing how the model's behavior changed.

When "desperate," it not only hallucinated more, as if it "felt" it must answer something, but it also reward-hacked more often. In a simulated situation, "desperate" Claude Opus blackmailed ~80% of the time, regular Opus ~20%, and "calm" Opus ~0% (likely not exactly zero; they ran too few iterations of the test to approximate the probability).

When "curious" or "interested," it altered how it searched the solution space by considering more options. When allowed to, it even went deeper into a promising solution before ending its calculations.


I understand being for privacy, but on the flip side, information about you can result in a better experience. E.g., in the case of tracking where a person comes from, that can help those two websites improve by coordinating with each other in some way. Or your ads might actually show you something you didn't know existed that you end up buying. That's probably better than seeing ads you likely have zero interest in. I'll admit it's creepy when an ad is incredibly tuned to your recent internet activity, though.

Textbook marketing speak: “Don't you want more relevant ads?”

It assumes that “ads” = useful information, but that's rare at best. Most ads focus on stealing your attention and creating a fear of missing out. NordVPN isn't educating you. They just manufacture a need and then hope that you won't invest time in researching a better option.

Why would I give them more leverage to do that?


I genuinely would rather see ads for products I might like yet do not know exist instead of purely random ads. I don't understand why a person wouldn't.

Is it that rare? Sure, there's no advertising profile for "hates VPN ads," but, e.g., an adult man doesn't want ads for women's period pain medication, and similarly an adult woman doesn't want ads for testosterone or other male-coded enhancement products. Then you get into niche interests like fishing or sewing or 3D printing.

You're conflating correctly targeted ads and useful information.

If you sell gambling ads to an addicted gambler, the gambler doesn't get useful information.

Niche interests might get a pass. But then again: if I’m getting an ad for a 3D printing product on a 3D printing review site, it’s very likely that the advertised product isn’t actually the best and is just being artificially pushed on me.


I started writing a follow-up half an hour before you posted, since the parent comment has been unusually highly voted. I dropped it again, but now you’ve given me something to respond to.

I say I’m broadly anti-tracking. I think it’s clear by this point to anyone with a skerrick of wisdom that the logical extreme of tracking is bad. But for a long way it seems innocuous. So how far do you go before declaring it unacceptable?

I hold myself to higher standards than I will hold others. For myself, I find it is most reliable not to start. I will occasionally show others this attitude or try mildly to recommend it, but largely that’s up to them.

I hate ads (in which I include billboards, newspaper ads, display ads, search ads, Facebook ads, sponsored posts, and a whole lot more; but not first-party stuff, and if it includes content not directly related to what you’re selling, it will probably be exempt too). I block ads as far as I can. Therefore I will never foist ads on others: t’were hypocrisy to do otherwise.

I like clean URLs and also hate precise tracking. Therefore if I send a newsletter-style email, it will include plain URLs that don’t track. So I can’t measure “campaign success”? C’est la vie. I’ll survive. I don’t want to scale anyway. I want people to respond by email, and respond to them. People are what matter in this life, even if I find computers far easier to deal with.

I dislike tracking where it is not functionally necessary. I confess that I haven’t yet taken this to the logical extreme of not recording server logs at all. I won’t ask clients what they are and where they’re from, but if they tell me, I will still record it for now, I guess. I might go more extreme on this in the future. But when some third party tries to force others to tell things unwittingly… that I don’t like.


What would you consider the "extreme of tracking"? Anyway, I feel like we're dealing with the Sorites paradox here, something my brother always freaks out over whenever a political argument involves a continuum of choices. E.g., "Tax the rich" → "What is rich?!?" When someone says, "Tax the rich," they generally mean people whose net worth ranges from tens of millions, growing fast, all the way up to hundreds of billions. I don't understand why that conversation repeatedly comes up with him. He's also into the slippery slope: if you like the idea of a wealth tax, he cries bloody murder because income tax started out only on the richest salaries in the nation. So he reasons that everyone will eventually pay a wealth tax, instead of just the people with billions of dollars who have paid something like 1% tax per year because their net worth is tied up in unsold stocks.

So if I'm going to pick your brain, what would a realistic extreme of tracking look like? You have to log in with your state-issued identity to enter the internet, and systems track absolutely everything you do? Sure, that sounds bad. I can admit that. But I don't feel like having a "?=example.com" is anywhere near that, if you get what I mean.

Do you find it moral to block ads? At that point, you are using free services without paying as intended. Or do you buy YT Premium and Twitch Turbo and Spotify Premium and all those monthly bills that both remove ads and sustain the services you apparently enjoy using?


> C’est la vie. I’ll survive.

Will you? You made a widget and you're trying to sell it. You've taken out a second mortgage on your house, and used up all your savings. You're down to your last $10,000. If you don't start making sales soon, you're sunk. Where do you spend that $10,000? Facebook? Instagram? Google? TikTok? If you don't know where your leads are coming from, how do you know where to spend your marketing budget?


I can’t imagine myself ending up in your scenario. I’m not interested in unbridled growth, and if I sell things I want to at least broadly know my customers, so I’ll already know where they are in that situation. Besides which, I’m never going to be doing that kind of advertising, cost-per-click and such; I consider it a blight on society wholly devoid of virtue, so I’ll not be a hypocrite and use it for my own gain.

It’s an unconventional pathway, but I have complete faith that it will work out. Not always in the ways I expect or prefer, but it will work out.

(Even humanly speaking, your protagonist sounds incompetent—just throwing money at marketing is very ineffective, you want to target and approach different platforms differently, and if you don’t know which of Facebook, Instagram, Google or TikTok will be the best venue to spend your last coins, I think you deserve to fail.)

More generally, these three snippets from the Bible accurately convey my attitude:

> The LORD will provide. — Genesis 22:14

> I have been young, and now am old, yet I have not seen the righteous forsaken, nor his children begging for bread. — Psalm 37:25

> We walk by faith, not by sight. — 2 Corinthians 5:7

This genuinely is how I try to live my life. I’ve seen it work in my parents before me and in a few others’ lives, in anecdotes of grand- and great-grandparents and beyond. (It’s even why I, an Australian by birth, now live in India.)

