The offered discourse on AI is “isn’t it cool how this can replace the work that you do? Eventually we won’t even need you and you can go get a hobby to spend the rest of your life.”
That doesn’t exactly leave a lot of room for people to feel the need to be involved in a discourse about it. For one thing, most people aren’t workaholics looking for extra hobby time.
The author mentions that ChatGPT can search the web. Okay, calling a search engine and retrieving a result has been possible for a while; LLM companies just slapped a statistical response layer on top as the UI.
Maybe the discourse sucks because the reality of it sucks?
If anyone wants to know exactly how that would be achieved, just look at how Google does "support" right now. No need to predict the future.
Google's "support" is a robot that sends passive aggressive mocking emails to those who were screwed over by another robot that made up reasons to lock them out of their digital lives [1]. It allows Google to save a ton of money while evading accountability.
It's the same thing with the latest overhyped robots. It won't even matter whether or not it's actually competent at the thing it's supposed to do. It will replace people regardless.
> I think the problem is that the statement is more like:
> "Eventually we won't even need you and you can go die in a ditch somewhere while we party on our very large boats."
Exactly. If...
>> “isn’t it cool how this can replace the work that you do? Eventually we won’t even need you and you can go get a hobby to spend the rest of your life.”
...was even remotely true, we'd have already had that outcome, before AI.
“Isn’t it cool that I don’t have to write boiler plate and can prototype quickly? My job isn’t replaced because coding is not my job — it’s solving domain specific problems?”
I’m in my late 40s, have written code for 3 decades (yes started in my teens) and have always known that the code was never the point. Code is a means of solving a problem, mostly unrelated to computers (unless you work on pure software tooling).
This is why I chose not to study computer science. I studied something else and kept coding. I’ve always felt that CS as a field is oversubscribed because of $$$ dangling due to big tech.
So many fields are computational these days, and the key is to apply coding to them. For instance, a PhD in biology alone gets you nowhere, so many biologists are now computational biologists or statisticians. Same with computational chemists, etc.
For most of my career I’ve written code, but in service of solving a real world physical problem (sensor based monitoring, logistics, mathematical modeling, optimization).
> My job isn’t replaced because coding is not my job — it’s solving domain specific problems?
I would wonder why you are so complacent as to think next year's models won't be able to solve domain-specific problems? Look at how bad text, code, audio, and image generation were five years ago versus how capable they are today. Video generation wasn't even conceivable then.
As an equally middle-aged person with children I'm less worried about myself than the next generation. What are people still in school right now, with dreams of being architects or lawyers or artists or writers or doctors or podcasters or youtubers, actually going to do with their lives? How will they find satisfaction in a job well done? How will they make money and feed and house themselves?
The economy only works because people consume goods and services. If they can't do that, then capital can't make any money. So whatever the case, capital needs to ensure that the ability to consume is preserved.
This is the same conversation that happens decade after decade.
I agree with you, but no one listened back then, why would they ever think about listening now.
Capital formation comes before everything else, not the other way around. When you have nothing of value to trade, it simply can't happen, and inevitable hyper-inflationary/deflationary cycles begin which, once started, can't be stopped.
These people think survival is guaranteed, jobs are guaranteed, and the how doesn't matter; it happens because some politician says it does; reality doesn't matter.
That's the line and level of thinking we are dealing with here. How do you convince someone that if they do something, they and their children may die as a consequence, if they can't make that connection?
Communication ceases being possible in a noisy medium at a certain point, according to Shannon. Pretty sure we've crossed that point, and where we may previously have been able to discern and separate the garbage, now, through mimicry, it's all but impossible.
Intelligent people don't waste their efforts on lost causes. People make their own decisions, and they and their children will pay for the consequences of those choices; even if they didn't realize that was the choice they were making at the time they made it.
> I agree with you, but no one listened back then, why would they ever think about listening now.
Because we lead vastly better lives today than 100 years ago, when everyone was also raging about technology stealing jobs. The economy has to adapt to technological change; there is no other way. It is a self-healing system. If technology removes a lot of jobs, then new jobs are created. It has to be this way, don't you see?
It can be a self-healing system, and capitalism is generally self-healing, but that is not necessarily the case in all economic systems.
There is a critical point where factors of production and producers leave the market because profit requirements cannot be met in terms of purchasing power (invariant to inflation). You might think those parties are all there is, but that's not the case; there is a third party, the state and its apparatus.
With money printing, any winner chosen by the state becomes its apparatus. Money printing takes many forms, but the most common is debt, more accurately non-reserve debt.
That third entity is not bound by profit constraints and outcompetes the others, rising in the wake of the destruction it causes. This is not self-healing; it is self-sustaining, and slow, and it does collapse given sufficient time.
New jobs aren't being created in sufficient volume to provide for the population. If anything, jobs have been removed en masse on the mere perception that AI can replace people.
You seem to rely heavily on fallacy in your reasoning. Specifically, survivorship bias. Things are being done that cannot be undone. There are fundamental limits, after which the structures fail.
You're saying I rely on fallacy, survivorship bias, but you have no way of knowing what is coming, and yet you state it so authoritatively.
I resort to evidence from history, because these same arguments happen decade after decade, and the doom scenario has not manifested yet. I also find the anti-AI view narrow-minded. You're only able to imagine one scenario, the dystopian scenario. And yet none of us know that this is the likely outcome. It could well be that AI actually does increase the means of productivity: we invent new medical cures, we invent new ways to grow food, we clean up our energy generation, and work becomes more optional as governments (who desperately want people to keep electing them) find ways of redistributing all the newly created wealth.
I don't know which will happen, and neither does anyone else.
This is naïve: the government and corporations are already working towards the dystopian result. Just because we don't "know" doesn't mean people can't make an educated guess. You need people to put LLMs on the good path before you can say the bad path won't happen. Right now people are loyal to the corporations that offer it; that's the bad path.
It's like predicting avalanches in avalanche-prone areas.
You may not know the individual particle interactions and forces that will inevitably set the next avalanche off, but you know it will happen based on factors that increase the likelihood dramatically.
For example, the likelihood of an avalanche increases the more snowpack there is, and it goes to zero when the snowpack is gone. The same could be said of LLMs.
You know corporations will do absolutely anything, even destroy their business model, so long as they make more money in the short term. John Deere is a perfect example of this, and Mexico finally took action because we couldn't, which culminated in a ~14bn drop in capex on Wall Street for the stock. It was over 10 years in the making, but it happened.
The more concentrated the market share and decision-making, the greater the damage, and the more impact bad decisions have compared to good decisions. You tread water until you drown.
> You're saying I rely on fallacy, survivorship bias, but you have no way of knowing what is coming.
Just because you happen to be blind in this area doesn't mean all people are blind. In The Day After Tomorrow, you had that group at the library that chose to follow the police officer despite warnings that going out into the storm would kill them. What happened? They died.
That is how reality works; it doesn't care about belief. It's pass/fail, live/die.
The thing about a classical education (following Greek and Roman Western philosophy) is that you can see a lot more of reality accurately than someone who hasn't received it, and an order of magnitude more than someone who has been indoctrinated. You know the dynamics and how systems interact.
The dynamics of systems don't just disappear, there is inertia, and you can see where that is going even if you cannot predict individual details or a timeline. It is a stochastic environment, but you can make accurate predictions like El Nino/La Nina weather patterns with the right know-how and observation skills. Everything we know today originated from observation (objective measure), and trial and error.
This framework is called first principles, or a first-principled approach. It's the backbone of science, and it ties everything that is important to objective measure, and to the limits of error. When dealing with human systems of organization, you can treat the system in predictable ways at the sacrifice of some accuracy, but that doesn't negate it completely.
These are things that matter more than other things, and let one predict the future of an existing system, if carefully observed. Like a dam where the concrete has started cracking might indicate structural weakness prior to a catastrophic collapse.
It is not the government's job to redistribute wealth. That is communist/Marxist/socialist rhetoric, and it fails for obvious reasons I won't get into. Mises sums it up in his writings back in the 1930s. You like to claim you base your reasoning on history, but you have to include the parts you don't agree with to actually be doing that.
Just because you don't know what will happen doesn't mean others can't. These are fundamental biases to your perception that rigorous critical thinking teaches you to avoid so you are not dead wrong.
There are people that see the trends before others because they follow a first principled approach, and they save themselves, or may even profit off that when survival is not at risk.
The blind will often cause chaos to profit, thinking no matter what they do individually they can't end it all. The exact same kind of fallacy that you seem to be falling into, survivorship bias.
There are phase changes in many systems. The specific bounds may not be known or knowable in detail ahead of time, but they have been shown to happen, and in such environments precursor details matter.
The moment you start dismissing likely outcomes without basis, is the moment you and those you care about go extinct when those outcomes happen and you are in the path of that outcome.
No one knows everything, but there are some people that know more than others.
It is a fairly short jaunt, in the scheme of things, from the falling dominoes caused by the elimination of entry-level positions (and of capital formation as a whole) to socio-economic collapse (where no goods are produced or can be exchanged).
The major problem is no one is listening to the smartest people because they are no longer in the room, only yes people get into the room, the blind leading the blind. That has only one type of outcome given sufficient time. Destruction.
> I would wonder why you are so complacent as to think next year's models won't be able to solve domain-specific problems?
If your domain is complex enough and has a critical people-facing component, you generally still have some runway. If it's not, then it's ripe for disruption anyway, if not by LLMs then by something else. I pivoted at age 32 because of this. I pivoted again at age 40 (I took a two-level title drop, from principal engineer to midlevel, but I got to learn a new domain, got promoted back to one level below, and now make more money).
I always treat my marketability not as a one-and-done but as a perishable quantity. I've never taken for granted that I'll have job security if I don't strategize, because I grew up in a time of uncertainty and in a society where a high-paying job was not guaranteed (some jobs, like grocery clerk, were however). People who talk about "job security" as an entitlement of life are the first ones to be wiped out.
That said, not everyone is capable of constantly upgrading their skills and pivoting; we need some cushion for economic disruption for folks who have limited retrainability. But I suspect this is not everyone; most people just haven't had to do it, so they think they can't.
Americans have not had to face this en masse in the last 30 years, but many people around the world have. If you've lived in competitive societies where there is job scarcity, you get used to this reality quickly.
> What are people still in school right now, with dreams of being architects or lawyers or artists or writers or doctors or podcasters or youtubers, actually going to do with their lives? How will they find satisfaction in a job well done? How will they make money and feed and house themselves?
I think those jobs will still exist in some form but there will be a painful period where everyone figures out how to be differentiated. I’m a hobbyist YouTuber in my free time (YouTuber was a job that didn’t exist before) and I think it’s hard to replace parasocial relationships — AI slop already exists on YouTube which gets views but few subscriptions.
The scope of jobs will also shift, and we will see things moving toward realms requiring human judgment — delivering things that require interpretation. Job scopes today are actually already much more than people think. Again no guarantee against disruption but job security was always an illusion anyway and the sooner we realize this the sooner we can adopt a preparatory mindset. (In a way, Americans are actually well positioned due to our relationship with capitalism)
Even the demise of radiologists has been overstated, because being a radiologist is much more than just detecting disease from an image.
Writers will still be around. They might not be able to charge per word, but they'll pivot to a new model. The transactional model will be gone, but I'm convinced something else will replace it.
I’m not sure about any of this because I can’t predict the future, but I have seen the past and the doomsday scenario doesn’t seem to me the inexorable one.
There are things being done which cannot be undone, and there are issues that were long predicted, and ignored, and the consequences are now bearing fruit.
If you haven't heard a real doomsday scenario that's likely, you haven't been listening to the right people, and you rely far too much on the fallacy of survivorship bias.
If you don't have a plan to replace a fundamental societal model, there are two potential outcomes: either someone comes up with something because they've been working on it (and it works, which is rare), or everything that depends on that system fails and the consequences occur. In other words, everyone starves.
Think about what it would mean, overnight, if exchange suddenly became impossible for supply chains built on just-in-time logistics. We've seen it, during the pandemic, but that was just a small disruption, and not a continuing one.
Imagine it. Nothing on the shelves. No amount of money that will let you get what you need (toilet paper). No means of getting it within the short timetables of need. What happens? Prior to 2020, people would have called you crazy if you said those things would happen.
Bad things happen if you don't have a plan to make sure they don't happen.
I think this is hilarious, because it's exactly the type of low effort response that tends to dominate general conversations about AI.
You are making the author's point.
I think there's a lot more nuance in
> "isn’t it cool how this can replace the work that you do? Eventually we won’t even need you and you can go get a hobby to spend the rest of your life."
than you'd like to admit, and some conversations that are worth having in earnest instead of simply resorting to trivial things like
> "Maybe the discourse sucks because the reality of it sucks?"
Maybe the reality of it doesn't suck?
In the same way that a reality where we have things like bulldozers, printing presses, looms, and a cotton gin doesn't actually suck, at the end of the day.
It absolutely sucked for some people, some of the time - and that's an important part of the conversation, but it's not the conversation "end of sentence".
> than you'd like to admit, and some conversations that are worth having in earnest instead of simply resorting to trivial things like
Sure, the author wants to talk about the technical specifics of LLMs. Yet LLMs enable a lot of people to avoid understanding even the technical points. That would disincentivize people from understanding enough to have the discourse the author considers valuable.
> In the same way that a reality where we have things like bulldozers, printing presses, looms, and a cotton gin doesn't actually suck, at the end of the day.
I really don't care about grand-scheme-of-things-type responses to criticism of LLMs. But for the sake of argument, why should I care about discussing LLMs and their technical aspects if, in the grand scheme of things, we're all to eventually die?
It is the end of the sentence because most people can't imagine what comes next besides not having a job. It's not that they won't be fine if a super AI takes over tomorrow; it's that the literal limit of their concerns today is making money for themselves.
It might be different if LLMs actually made their users richer, but they don't; they make the corporations richer.
> But for the sake of argument, why should I care about discussing LLMs and their technical aspects if, in the grand scheme of things, we're all to eventually die?
Why do anything, then? This is the laziest possible retort I can imagine.
> In the same way that a reality where we have things like bulldozers, printing presses, looms, and a cotton gin doesn't actually suck, at the end of the day.
So you’re allowed that type of rhetoric, but when I use it, it’s lazy.
My point has been that it sucks, now. Right now, it’s hysterical on both sides of the conversation. So yes it sucks. In the grand scheme of things it may not suck or it could get even worse. Again one side of the conversation is choosing to promote only one of those ideas. Even though there is no evidence we will end up in a utopia from it. In fact there’s a lot of evidence to the contrary. So yes the conversation sucks. The reality right now sucks.
Yes, because - to be blunt - yours is so much lazier.
I picked machines that were undeniably controversial at the time they were introduced, because they did all the things you're claiming to be upset about here: They put people out of work, they enriched capital owners, they changed social structures, they altered governments.
Essentially, they are relevant discussion items for the topic at hand (if you're unaware, the general use of "luddite" to mean "anti-technology" comes directly from the English textile workers being replaced by looms, which they tried to destroy repeatedly; they were eventually suppressed with military force, with sentences including execution and exile to penal colonies).
That's not some blasé "waves hand 'technology good'" reference I'm making, and I think your response is partially so annoying because we likely agree on a lot of things about the potential negative impacts of AI.
I just think the way you're articulating it is relatively low effort, and I think the original post is absolutely allowed to say that. You'll get dismissed because you're so obviously wrong about the easily verifiable things that it's hard to take you seriously about anything.
Which is exactly the impact of comments like "Why talk about this because we'll all eventually die" - they alienate your allies because they are trivial and trite trash.
Okay well as long as we’re delivering low effort attacks, I totally agree and think the same of you. I can’t take your response or ANYTHING you say seriously. Good talk, you’re right there’s plenty of good discourse on AI between people. This conversation is a winning example.
No - I picked it precisely because it's a machine that improved efficiencies but undeniably had negative impacts as well.
I think that's my whole point - I'm not saying that the person I initially responded to is incorrect in not liking the impacts AI might have. I think it's a perfectly reasonable take to be concerned about how AI might impact you, and to express that, along with negative sentiments.
I'm saying that the argument they are currently making
> "Maybe the discourse sucks because the reality of it sucks?"
and even the slightly better
> "Okay calling a search engine and retrieving a result has been possible for a while. Llm companies just slapped statistical response on top as the UI."
Is a guaranteed way to be ignored and dismissed because it's a low effort emotional response - not an actual argument.
Those technological advancements with LLMs are low-effort advancements. So you only get low-effort responses.
Do you understand why maybe no one's wowed by browser automation and automated web search? Can you extrapolate why no one's stoked to talk about LLM bots replacing them with low-effort, inaccurate, "good-enough" fly-by research summarization?
These points are obvious to most people; that's why the discussion is low effort. You shouldn't need to expound high-effort discussion just because you feel the low-effort discussion doesn't make a clear point or makes LLMs look bad. The points are well discussed, and obvious. Hence low effort, hence sucky discourse.
Feel free to ignore and dismiss my perspective that doesn’t make me wrong or you right. It just makes you a bully.
Yeah, it's very polarized like everything seems to be these days. It's very similar to crypto in that people pumping it up for financial gain or because they enjoy being part of the current hype are overconfident in how much of an impact it's going to have. Unlike crypto my workplace is very into it for our domain, so it's something we have to deal with.
At my company most people do have a very balanced view of its capability, which is nice, but it's hard to find online discussion that isn't polarized. I guess it's also disheartening because if half of the optimistic projections are true, it's going to mean more kids cheating on homework, fewer people who are able to read and write and think critically, and more lonely people with AI girlfriends/boyfriends disconnected from human society.
> It's very similar to crypto in that people pumping it up for financial gain or because they enjoy being part of the current hype are overconfident in how much of an impact it's going to have. Unlike crypto my workplace is very into it for our domain, so it's something we have to deal with.
The worst part is that many of the people (investors) who are actively pushing such narratives (crypto, AI, etc.) not only mostly have undisclosed positions, but in private are doing or saying the exact opposite.
That is the reason why you see them publicly screaming such nonsensical predictions and you then get companies like builder.ai collapsing. (You won't see any investor in builder.ai proudly announcing this news in their portfolio).
I propose we stop using the term "discourse" to describe what amounts to mobs with ever-shifting alliances shouting at each other.
Not really sure what I'd call it instead, but in a perfect world "irrelevant" might be my top candidate.
And yes, I'm aware that this very comment is an example of such "discourse". But at least with sites like Reddit and Hacker News the "comment-ness" of these little thought droppings is part of the UI.
Microblogging sites, though... the thought droppings are the point. I remain unconvinced that this will be seen in the future as a societal advance no matter the underlying positions or values.
Well, technically, GPT "itself" really isn't a search engine. The LLM was trained with the capability to signal to the runtime that it needs additional information to fulfill the request, such as performing a web request, which is carried out by the runtime, and the results are fed back into the LLM as a regular prompt. Thus GPT only answers based on the textual context, again involving statistical likelihood.
Right, and I think that in this case the person making the original statement was probably erroneously using "ChatGPT" to mean "LLM" (in the same way that "Kleenex" is used to mean a disposable tissue).
It's true that ChatGPT-the-product can search the web (using a traditional search engine as a RAG datastore), and it's also true that o4-the-llm is not a go-out-and-find-external-truth search engine. I'm more sympathetic to the original point because many people do misunderstand what LLMs are and what they're capable of, and it's worth dispelling those myths (though it's valuable to do it precisely, specifically to avoid this sort of confusion).
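To make that flow concrete, here's a minimal sketch of the loop in Python. The model and the search function are made-up stand-ins (no real provider API is assumed); the point is that the model only emits a request, the runtime performs the actual web search, and the result is fed back to the model as ordinary context.

    # Minimal sketch of the search-tool loop described above.
    # `fake_model` and `fake_web_search` are hypothetical stand-ins,
    # not any real API; only the control flow matters here.

    def fake_model(messages):
        # Pretend LLM: asks for a search once, then answers from its context.
        if not any(m["role"] == "tool" for m in messages):
            return {"type": "tool_call", "query": "latest Python release"}
        return {"type": "answer", "text": "Answer composed from the search results now in context."}

    def fake_web_search(query):
        # Pretend search tool; in the real product the runtime runs this, not the model.
        return f"Top results for {query!r} (fetched by the runtime)"

    def run(user_question):
        messages = [{"role": "user", "content": user_question}]
        while True:
            out = fake_model(messages)
            if out["type"] == "tool_call":
                result = fake_web_search(out["query"])
                # The search result goes back to the model as a regular prompt message.
                messages.append({"role": "tool", "content": result})
            else:
                return out["text"]

    print(run("What is the latest Python release?"))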
>AI art is often terrible and slightly creepy.
I have seen AI art that was fine. This is very rare.
This is just the "special effects" problem all over again, its use is invisible if done correctly. Only the bad produced SFX sticks out and gets called "CGI".
Much like a programmer using AI to supplement their existing knowledge, a lot of artists (way more than you think) are using AI to supplement production of their art. The product is indistinguishable from their non-AI work, it just takes a fraction of the time.
People just aren't talking about it because it is highly stigmatized by the art community.
Interestingly, the proliferation of digital sfx led to lots of jobs going away and the jobs that replaced them being poorly paid relative to workload. It also led to a crap ton of shitty sfx.
Also interestingly, diffusion and generative video will probably make this worse by dint of being so much more accessible.
it's sad that the informal social media bubbles we all have formally forked[0] into Red Twitter and Blue Twitter. just a direct consequence of filters, feeds, algorithms that we say we don't want but vote for with our actions.
with Latent Space I try very hard to stay practical and grounded. but this means i lose out to other podcasters who get huge hits by asking biglab people their AGI timelines (perpetually 2 years away every 2 years, upper bounds of every estimation conveniently landing within the lifetime of $current_middle_aged_generation)
I remember this incident. The screenshots from swyx are fairly tame; the dataset creator had death threats posted.
That said, I'd re-emphasize your perception slightly — "perceive it as not being _uniquely_ anti-AI" is more how I view it. I see similar sentiment on other social media too.
Oh, after reading this, I also do remember the incident. I was pretty anti-AI myself at that point but also found the backlash very confusing. Honestly thinking about my response to that is probably a part of why I've moved towards the pro-AI side.
And yeah, I think you have the better emphasis too. thanks :)
> bluesky uniquely hates AI. very toxic around it.
Not unique at all; AI hate to the point of viewing it as literally the destruction of civilization warranting being stopped by any means is extremely common on Twitter.
If there's anything unique about Bluesky's anti-AI culture, it's that the opposite extreme of AI grifters hasn't moved to Bluesky in numbers that provide the balance of nuttiness that Twitter provides.
>Not unique at all; AI hate to the point of viewing it as literally the destruction of civilization warranting being stopped by any means is extremely common on Twitter
Well maybe because it's not at all being addressed by the other side
Things that are never addressed by AI grifters:
* The effect it's going to have on the sources of data it's driving traffic away from?
* What happens to the adoption curve when Stack overflow is dead and LLMs don't have a strong base of knowledge on it?
* Unlike the sewing machine, this one is primarily automating the tasks people enjoy?
* Climate change impact?
It's like wailing that the discourse around proof of work cryptos isn't up to snuff, but you don't care about the environmental impacts yourself. Well, then your discourse is not up to standard either.
Yes, if you are relying on social media as an AI information source, the absence of "high signal AI people" on a platform is a problem, though it's a distinct problem from "AI hate" on the platform.
I'm sympathetic, but I find it surprising that people expect rich discourse on microblogging sites like BlueSky et al.
There is probably an inverse relationship between number of voices on a platform and how nuanced the discourse can be. Podcasts kind of take this further by isolating the conversation to a few people who can dig deep.
Doesn't make every Tweet toxic and every podcast deep, but there's a tradeoff nonetheless.
I don't think that's necessarily true, I think it's about curation, not volume. The largest open source projects in the world have enormous inbound volume but extremely high quality discussion because of curation (I'm thinking about maintenance of Wikipedia, Open Street Map, and Godot).
This is also true on Twitter & Bluesky. Looking at the general feed is a completely different world from looking at specific networks.
I think the problem is that so many AI prognosticators talk about the impact of AI, without really talking about the impact of AI.
It’s easy to say “job xyz will be disrupted by AI”. Too few go the next level and say “that means that there will be almost no entry level positions for xyz, which means your kids, who are in school now, will face a very uncertain future. Here’s how you might prepare them for that future.”
Breathless pronouncements, not enough empathy, and not enough reckoning with the potential consequences. I’m not at all surprised people are turned off.
I feel this. My opinion is that folks on either side are not well informed.
Those who say AI won’t disrupt coding that much seem to use anecdotal evidence. Their own experiences, mostly. And perhaps they are using it differently than those who think otherwise. But they don’t seem interested to find out.
Similarly, those who claim it will disrupt coding a lot seem to be making premature conclusions … also from anecdotal evidence. Perhaps their work is a better use case for AI assisted coding. But they also don’t seem interested in learning more.
Sounds a lot like other polarizing topics, eh? I think that’s because it directly impacts people’s livelihood and identity … for better or worse.
>Oh, and of course, ethical considerations of technology are important too. I’d like to have better discussions about that as well.
That's the entire reason for the polarization. There's no point writing a blog post about the state of the discourse if you leapfrog over the crux of the debate.
There's a reason the detractors are ignoring the facts
If you're upset that they are acting like this, then you can't do the reverse
Talk about the environment, the good parts of life it's trying to automate away, the effect it's having on the internet and what happens to the adoption curve of new techs when Stack overflow is dead.
I intend to talk about these things. Part of why I didn't get into it in this post is because I want to hear opinions and get resources to help me think through what I think of on this, so that I can do so in a thoughtful manner.
I think that a large reason why the discourse sucks is because the public is seeing hardly any discussion using language that is specific about types and features.
Let me explain. With older technologies, the language to talk about it is well developed. I like to use nautical terminology as an example that most people are somewhat familiar with. There are a lot of terms like jib, sheets, mainsail, schooner, keel, galley etc, and a lot of people don't know what those mean. But it is pretty easy to recognize that there is a whole terminology to describe ships and the features thereof which is used by experts and which can be very specific. If one guy says the boat won't make the trip, and it's because the keel goes more than two fathoms deep, and the other guy says that of course it can because the galleon can go bireme past five hundred knots, even landlubbers will be able to figure out who knows their stuff.
But in the current AI discourse, AI is AI is AI. Agentic LLMs? AI. Non-agentic LLMs? Also AI. Diffusion models? AI. Search engine? AI, kinda. R2D2? AI. Autocomplete? Sure, why not, AI. It's like if sailors used language that barely distinguished between specific nautical technologies. Boat is Boat is Boat.
Now, nautical terms have developed over thousands of years, and AI (whichever type you mean) is a new technology that is not fully developed. But imagine a discourse where Boat is Boat is Boat. The bosses learn that we just got Boat working, and are making plans to conquer the New World. Other people are concerned about the moral implications of conquering the New World. Meanwhile, one of the sailors tasked with this tried using Boat over the weekend fishing with a friend, and he's not sure he can paddle it that far. Another guy tells him not to worry, that there's a new model of Boat and it doesn't even need a paddle. He says at the current rate of advancement, we should have a new Boat capable of going underwater in less than ten years! We thought of coming up with a name for this new type, but it's basically just another kind of Boat.
If we want better talk about AI, we should use better language about AI. We are living in the time where the language used to talk about these things will be developed. We might find there is more agreement when we are talking about the same things.
> To be clear, I am not particularly pro or anti AI. Here’s what I currently think:
> • [...]
> • AI writing is often bland and boring but better than the average person’s writing.
Yes, that is exactly the right thing to do.
Without style prompts, a model should produce competent output with generic style. Anything that is not "bland and boring" is going to make a lot of people unhappy, and be mismatched to most contexts.
So great success.
On the flip side, it is incredibly easy to add style via directions.
The right way to request a style you like is going to take some iteration, or style samples, because style is subjective and models can produce an infinite variety of styles.
Again, exactly what we want. Great success again.
It takes time to absorb that models are such uniquely broad tools we can't expect them to match preferences without specific requests. In humans, that is done by soaking up context. Models only have the context you give them, but are far more versatile.
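To illustrate how little it takes to "add style via directions": in a typical chat-style request, the difference between bland output and styled output is usually just an extra instruction, plus a writing sample if you have one. The payloads below are hypothetical (a generic message format, no particular vendor assumed):

    # Same question, two requests: one with no style steer, one with a
    # style direction and a sample. Hypothetical, provider-agnostic format.
    bland_request = [
        {"role": "user", "content": "Summarize this week's project update."},
    ]

    styled_request = [
        {"role": "system", "content": (
            "Write in short, punchy sentences with a dry sense of humor. "
            "Match the tone of the sample the user provides."
        )},
        {"role": "user", "content": "Sample of my voice: <paste a paragraph you like>"},
        {"role": "user", "content": "Summarize this week's project update."},
    ]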
It’s just happening really fast. And as Gibson said, the future is already here, just not evenly distributed
Just one example:
Less than a year ago, most companies hiring for developer positions had strict anti-AI policies, meaning candidates couldn't (shouldn't) use AI in their take-homes or live coding interviews. There were some exceptions of companies that didn't fully know where to draw the line.
Fast forward to today, and almost all companies either explicitly require the use of AI or at least expect it. Development teams describe themselves as AI-enabled and are fully embracing AI development tools. Candidates are supposed to be up to date and know how to use the tools effectively. There are a few exceptions of companies that still have strict policies, which mostly revolve around regulation (and they are solving the issue with on-premise models/services).
There's too much money at stake to be able to have sensible discussion about it online.
We saw the same thing with blockchain. IIRC, someone on the Cryptography list replied to the whitepaper with the critique that the energy use wouldn't be worth the measly number of transactions processed. If it were Bell Labs, that observation would have been followed up with two or three prototypes for fundamentally different decentralized payment designs by now.
Instead, it's over a decade later and all we have is a literal strategic reserve of hot air.
It seems that people can't grasp the exponential rate of developments here. They're stuck in the GPT-2 LLM narrative. Even with the amazing Veo 3 videos this week, people are still nitpicking and seemingly unable to remember the state of the art 2 weeks ago, 6 months ago, 1 year ago, etc.
I don't mean to say that scores on evaluation metrics will remain exponential but rather the developments, uses, integrations will (e.g., web search in ChatGPT), and people can't conceive or keep track of this, and therefore discussions on the area are always behind the times.
For example, I think it's inevitable now that TV/movie production will not exist as we know it in a short time, except as niche work, like fine art in the age of digital. It's also inevitable that fully personalised media will be predominant. I think this is obvious, but yet people are zooming in on the background of essentially perfect videos to spot minor and irrelevant coherence aberrations.
Nitpicking will also inevitably become a niche hobby, like people who complain about the colour grading on a movie remaster, while the rest of the world just watches the movie and doesn't notice or care about the issues.
You are saying all of this like it's a good thing.
I don't want to live in a world where reality doesn't exist (which tools like Veo3 will absolutely be used to distort the truth), but we are speed running ourselves to that destination.
Yes I agree, I don't think that it's good at all. It's just fascinating to me that people are criticising current LLMs using information they heard about LLMs 3 years ago, when right in front of their eyes are sci-fi-like results from the field.
> I don't want to live in a world where reality doesn't exist
Perhaps the end result is that the world turns away from digital completely and goes back to reality :) We see already that some universities are going back to written and oral assessments, for example.
There is no meaningful discourse because there is no meaningful decision at stake. The owning class has decided that “AI” will be shoved into any plausible orifice and the “discourse” online is just a reaction to a decision that has already been made.
Frankly the noise being made online about AI boils down to social posturing in nearly all cases. Even the author is striking a pose of a nuanced intellectual, but this pose, like the ones he opposes, will have no impact on events.
Well, it isn't searching the web: it has a cut-off date, and it takes bits of info from various sources, often distorting them; hence it's not a search engine.
ChatGPT.com (and other LLM UIs such as Perplexity) now use a Tool that searches the web if it detects that it is necessary to solve the user's question, and then uses the output of that search query to answer the user's question. This allows it to surface responses that are out of its training data cutoff date, and cite specific data sources.
This is unfortunately a Bluesky problem. I’m still using the platform, but it’s got too many people posting unsophisticated takes, either in tech or culture or politics.
It's not about being pro-AI or anti, left or right. I just read too much on Bluesky that has me thinking "oh, you really have no idea what you're talking about." As Steve says, verifying that they're wrong is trivial, and yet there they are.
Twitter, on the other hand, rarely has this problem if you only look at the “following” tab and curate who you follow.
Twitter doesn't "rarely have this problem" if you have to use it in a specific way to avoid it. Find a specific way to use Bluesky and you'll fare about the same. There is nothing that makes the users of a different platform more qualified to discuss something.
This anti-AI talk is a coping mechanism, but completely unproductive. People don't like the possible or likely consequences of AI that delivers even 10% of its boosters' outlandish promises, and instead of planning how to organize politically to mitigate them, just plug their ears and say "la la la I can't hear you".
I have a friend who is a translator, and all her translation work has dried up. All she gets now is tedious jobs reviewing and correcting machine translations. As Simon Willison points out, writing code is much more gratifying than code review, and I can easily imagine a future where the only work left for most human coders is reviewing LLM-generated code, which would be terrible for job satisfaction.
The early 19th-century Luddites, skilled craftspeople and weavers, understood the benefits of mechanized looms but demanded that the profits be shared equitably with the displaced workers. When the proto-capitalists, who largely overlapped with the feudal aristocracy that rules Britain to this day, refused, they engaged in a guerrilla campaign of breaking looms, and the elite establishment responded by sending more soldiers to quell the uprising than were fighting Napoleon, and by hanging people for breaking frames.
Ironically the kinds of jobs hardest for AI to replace are those requiring manual dexterity and skill, often derided as "unskilled work" by snobbish white-collar academics and workers.
Disclaimer: I feel that your post is overly angry and reactive, but I would like to have meaningful discourse about it, and not just talk past each other with downvotes.
So with that said, what if we started from a position where I accept you are correct about the exploitation stuff, and went from there?
Do you see a way back from where we are now? I don't. The genie is out of the bottle, and depending on your view, that means "big tech" has won again. So what do we do? Do you think LLMs are useless, or are they not useless? Will they be useless in the future? Do you think they will be an empowering tool, or one that disempowers?
> Can a machine translate prose?
Maybe not now, but can it in the future? I'm going to guess yes. So of what use is arguing that it is rubbish because it can't translate prose now?
This is like asking me if steam power was a force for good, or the combustion engine, or computers. The answer is yes and no, but the more important point is that innovation and progress is inevitable. We lead vastly better lives than people 100 years ago. Why? Because of innovation.
I use LLM code completion in my IDE, and it is fairly useful, but not yet ideal. I use it all the time to ask technical questions rather than searching documentation first. It is extremely helpful for that task, and most of the time is correct - I always double check after it points me in the right direction.
I see a path where AI leads to the destruction of humanity, but it could equally lead to a post scarcity utopia.
You are being completely dogmatic in your view, and I can understand that. If you truly believe AI is a force for evil, then it makes sense for you to rage against it. But keep in mind, no one knows yet, and be open to the possibility that you're wrong.
"It is difficult to get a man to understand something, when his salary depends on his not understanding it."
The NUMBER ONE reason anyone is anti-AI or AI-skeptical is because it directly threatens their livelihood. Many people have ignored all development in the AI space because they don't want to admit that it's a real problem that threatens their livelihood.
The simple fact of the matter is, the most intelligent people who should be participating in discussion on important existential matters like this, are not participating.
They are not participating because they know and realize that what people see today in communications is not a discussion. They have realized that, for the most part, the communication channel has been jammed past Shannon's limit with false narratives that are not actually coming from real people. Objective reasoning and statements are drowned out by the flood of lies. It's an attack, one long predicted but ignored.
What happens anytime communications are jammed and you don't realize it? It leads you to improper decisions which have a cost in blood instead of in resources. The bigger the impact, the longer the dynamics take, the more harm there is.
Worse, the simulacra released to do these things behave as a toddler behaves, or rather far worse: as an evil, malevolent person bent on manipulation and total control, neglecting reality.
If you've ever had an argument with a toddler, you know inherently that they are just play-acting, with a tantrum held in reserve for when they are shown to be wrong. The same goes for the sock puppets, with malice held in reserve, targeting blind spots to inflict torture and psychological harm on those engaging in goodwill and seeking a long-term future for their children.
When the distance between what is said and what we know to be objectively true is an abyss, evil wins, and everyone becomes victim to the loudest monster in the room, a monster which eventually comes to destroy us all.
Evil is not just some metaphorical construct but mainly describes the outcomes that result in destruction that are entirely preventable through choice and knowledge (truth).
The people who are truly intelligent have realized that without communication there can be no response, no counteraction, it is a runaway machine, a train running along a track at full speed that ends going over a cliff into the ocean where those aboard die, and those aboard don't know it because they've all put blindfolds on in merriment. Willful induction to blindness is a very dangerous thing.
Those that can are withdrawing and preparing for the inevitable consequence of these quite fundamental dynamics. We've passed a critical point of no return and no one noticed because their vision was obscured purposefully. They didn't realize the nature of the failures, which act as a wave with no forewarning aside from long discounted details that were ignored. A cascading failure based in hysteresis.
The only people that will likely survive this are the ones that recognized and prepared, a paltry amount are doing this compared to the number of people globally living today.
Instead of solving problems, and making careful choices, the aggregate of decisions from people over the past few generations were to blind themselves and others to the problems they created. To cherry-pick education, torture the rational, obscure reality, and live the good life with front-loaded benefits through money printing, thinking they'll be dead before the consequences can reach them, and sacrificing those of lesser intellect who were incapable of seeing through the lies.
The bill always comes due.
The slow knife penetrates without anyone noticing.