ChatGPT Can't Kill Anything Worth Preserving (biblioracle.substack.com)
239 points by blueridge on Jan 13, 2023 | 342 comments



I'm less worried about stuff like ChatGPT killing things off, but more about just making everything noticeably a bit worse.

To take an analogy: bad voice recognition software abounds everywhere, not because it is better than what it replaced in terms of UX, but because it works just enough and allows massive cost-savings on hiring people to do customer service jobs.

A world where most marketing copy is written by mediocre AI, and more and more written and visual content is generated by big models that are technically impressive but intellectually hollow, is going to be one where the quality of everything sucks just a bit more, but it's so cheap that it becomes pervasive.

(This trend is already apparent and not created by, or limited to, ChatGPT.)


> bad voice recognition software abounds everywhere, not because it is better than what it replaced in terms of UX, but because it works just enough and allows massive cost-savings on hiring people to do customer service jobs.

This one really puzzles me.

Voice recognition replaced phone trees. And from what I can tell, it's just worse. In this particular use case I don't think it really replaced tier 1 support. Either I'm missing something and it's a lot better for some group(s) of people, or people adopted it because of promises that failed to materialize.


It exists to hide the fact that companies took a page out of Google's book and forwent customer service altogether.


This is more befuddling - they are replacing one automated system (button driven) with a worse automated system (voice driven). No one wins unless you still have a rotary phone.


The secret here is that for the majority of companies, the goal is not to support you as a customer effectively, but rather to minimize the amount of time spent organization-wide supporting ALL customers. To that end, the automated systems are about frustrating customers to the point that they'll just "go away." The voice-based systems are more frustrating than the button-based ones, and infinitely more frustrating than actually talking to a human being who can actually help you. So it's a win for the majority of companies that just view customer service as a cost centre to be minimized, and it's a win for the SaaS companies that sell these useless, irritating, anti-consumer systems.


I've wondered why companies don't auction top places in line. If they're going to have people queuing, why not attempt to extract some money from them? Implementing such a system and selling it seems like a possible business idea for someone who believes this is a nicely utilitarian solution.


It doesn't matter that much because the human on the other side is not provided with the tools and information needed to help you.

Customer service, be it commercial or tech support, is always treated horribly.


I agree. I like it, but a lot of customers will feel nickel-and-dimed even if they never pay for it.


Yes, it's what rfwhyte said. It's literally an externality to the company that people get inconvenienced for hours on understaffed phone lines and end up losing out on money, returns, support, etc. They benefit by reducing overhead and making the support unnavigable (thus reducing successful returns/support overhead); you pay the cost with hundreds of hours of your life on hold.


A SaaS that sells voice-driven customer support wins.

A manager with misaligned incentives that reduced call center costs by 50% and got promoted also wins.


Phone trees are deliberately bad, so I assume voice recognition is deliberately worse. The goal is to frustrate nuisance customers so that they give up.


What is so frustrating to me is how they could simply offer email/ticket-based support, which would be so much easier and cheaper for most issues. And yet so many companies do not even have the option. I've been dealing with a home warranty company, and they have an online portal where you can see the status of a claim. But if you click that you need help or want to resolve an issue, it just gives you a popup to call their 800 number. Then you have to climb through a phone tree for 10 minutes until being told that they are experiencing higher than usual call volume, and then wait on hold for 45 minutes. All for simple issues that could be handled through email. My only guess is that they don't want the paper trail of their terrible customer service.

It took a global pandemic for the California DMV to finally start an email ticket system. It is hands down so much better than going in to the DMV and has to be way more efficient for them. You simply fill out a form and you get an email with a ticket number. They respond within 3 business days asking you to send images of documents or whatever is missing. A comparatively pleasant experience.


> email/ticket based support

This would provide the customer with a written record which could be used against the corporation. Phone is generally favorable to the corporations because the corporation can decide whether/how to retain/destroy records, but the customer will almost never have a transcript or recording to use for their own evidence.

Many corporations have moved to a system where you can either call an automated line which does its best to make you hang up...or you can fill out a browser form which provides the customer no ticket/confirmation number/email/any record that you ever filled it out.


Time for legislation around this. Email support must be an option, and a ticket confirmation record must be provided.


Pretty sure that it very noticeably results in lower costs. People often literally cannot figure out how to get through the voice recognition system to reach a human customer support person.

So the companies save money by then cutting the size of their call centers.


Agreed. These days though I tend to just mash 0 repeatedly and say the word "representative".


This is my default as well (although I use "talk to a human"), but increasingly it results in the automated phone system hanging up on me if I don't at least navigate halfway through the bullshit first.


There've been some attempts at apps which navigate phone trees until they get a person or secure a callback for you. Not sure what the state of those is these days.


I just keep pressing 0 until a live person answers.


I once heard that in some countries you can just keep repeating "what" and they're legally bound to forward you to a human agent. Something about accessibility laws, in case you don't know how to type numbers or can't understand what the voice wants (people with hearing aids or other disabilities)... Can't find the reference though.


Some months ago I was trying to set up a medical appointment for my dad. When the robo-voice picked up, I got frustrated and shouted "kurwa" (a universal swear word in Polish). The automaton responded, "I understand, connecting you to our consultants."


That is amazing. I'm going to try this.


I think that's old-fashioned. Newer systems I've dealt with just say "sorry, I can't understand you, please try again later" and hang up.


Related to this, I've had to call an airline and get told the queue is too long and to try calling back later. I have no idea who thought that was a good idea to implement because the queue will always be super long.


> Related to this, I've had to call an airline and get told the queue is too long and to try calling back later. I have no idea who thought that was a good idea to implement because the queue will always be super long.

It's 100% internal metrics.

When I was in the military, there was some metric that the clinic was graded on - I think it was number of days between when an appointment was made and when the visit happened. So the clinics just didn't allow unimportant people to make appointments more than 2 weeks out.


I’m happy enough if they just straight up say you’ll be waiting a while, and to press a button to get a call back when you reach the top of the queue.

Can get on with life until they ring you back then.


> get told the queue is too long and to try calling back later. I have no idea who thought that was a good idea to implement because the queue will always be super long.

A call deferred is a call denied.

Convincing the customer to give up is one way to reduce workload.

See also: automated systems that play a prerecorded message along the lines of "we're currently experiencing higher than normal call volume" regardless of how long wait times even are.


The best is when the queue is too long and they let you opt in to being called back when it's your turn.


I double down and press 0 while shouting "Agent!"


There's a rumour that cursing can also help you get to the operator (faster?). Not sure how true that is, though.


If too many people do that, they will turn it off; just a few representatives cannot handle such an inflow of requests. The problem is skimping on customer service personnel.

I could see it saying "I understand you want to reach a customer representative" or "Please select from these options while I connect you to a representative" while circling you around a graph of options, or just automatically hanging up after 45 minutes to an hour on the call.


What you've described is something I've directly experienced. There are phone systems that will do just that when you ask for an operator.


Shouting "operator" or "I want to speak to a human" in your loudest, most Karen-like voice also seems to work.


"Representative! ... Representative! ... Representative!"


On the latest systems this often results in the automated system hanging up on you.


You should automate that.


This feature has nothing to do with lower costs and everything to do with the management chain in customer service taking a victory lap for adding voice recognition to the phone tree. Lowering cost is certainly how they justify it, and they also get credit for being modern and keeping their systems up to date with current trends. Whether it actually saves anything or has any positive benefit is immeasurable and irrelevant.


Both were pretty bad. Even when I know I want something that should be easy, I often cannot find it. Things like account balance are just too hard to get, and then instead of a balance they finally give me a long set of numbers that includes the balance, but also the last payment, the last 10 charges... making it take too long to get what I really want.


Trading quality for price has been happening everywhere for long enough that it is easy to see how, unfortunately, it will now play out in the arts as well. I mean, not that long ago all clothes you wore were custom made by a tailor, all the music you listened to was played live by musicians, and all stories were brought to life in front of you by theatre actors: now all of those, while still available, are significantly more expensive and niche than the mechanized (production or reproduction) equivalents.

ChatGPT, Stable Diffusion, and, I am sure, upcoming music models will enable this mechanization in the arts. Great artists will be unaffected, but it will become nearly impossible for "good enough" artists to compete, and the floor of what counts as an acceptable, payable level of competency will rise, making it more difficult for people to support themselves while improving their skills.


> Trading quality for price

Means that many, many more people can have something of passable quality. For example:

> not that long ago all clothes you wore were custom made by a tailor

If you could afford a tailor; otherwise you had to make do with homemade rags that constantly needed mending and looked terrible.

> all the music you listened to was played live by musicians

If you could afford to go to concerts.

> all stories were brought to life in front of you by theatre actors

If you could afford to go to the theatre.

> now all of those while still available are significantly more expensive and niche than the mechanized (production or reproduction) equivalents

Which the vast majority of people can afford, and which significantly improves their quality of life. Now they can buy clothes at Walmart or Target instead of having to wear homemade; sure, not the same as a custom tailored suit, but good enough. Now they can buy digital recordings of world class musicians and theatre actors for much, much less than it would cost to see them live.

> ChatGPT, stable diffusion and I am sure upcoming music models will enable this mechanization in the arts

That already happened decades ago, as soon as mass produced recordings became widely available. It's already next to impossible for any artist who isn't world class (or, more precisely, is not publicized so that people think they're world class) to make a living at their art. ChatGPT and the equivalent in other arts aren't going to affect that much.


> > not that long ago all clothes you wore were custom made by a tailor

> If you could afford a tailor; otherwise you had to make do with homemade rags that constantly needed mending and looked terrible.

Or you bought clothes infrequently and maintained them, and they lasted longer because the quality was much better.

People are so used to fast fashion these days that they assume the poor in previous eras suffered in this regard way more than they actually did, because they assume the poor quality of clothing today is representative of how it always was, rather than a recent phenomenon.

The quality of clothing was much, much higher - even for the relatively poor - and clothing wasn't treated as disposable, so people would maintain and repair it, so it would last much longer.

> Which the vast majority of people can afford, and which significantly improves their quality of life. Now they can buy clothes at Walmart or Target

Buying clothes at Walmart or Target is a step down, not a step up. (And ironically, it's not even necessarily cheaper!)

If we're talking about clothing, it's very clear that these trends have served to the detriment of the average person, not to their benefit. The ones who actually benefit are the ones capturing the profit - the Sam Waltons of the world.


You write as if, as soon as I put on a pair of jeans from Walmart, it immediately begins disintegrating. It's not nearly as bad as you're making it out to be. I've had relatively cheap clothes for years without having to worry about it very much.

And it's nice that when my clothes are stained or irreparably damaged I can afford to replace them quickly.


People still have high quality clothing available, but when given the option they choose to buy cheaper clothing more frequently rather than mend high quality ones.

This is a preference that has been demonstrated time and again throughout the world as it became an option.

So no, I don't believe it is clear at all how this is to the detriment of the average person.


> People still have high quality clothing available, but when given the option they choose to buy cheaper clothing more frequently rather than mend high quality ones. This is a preference that has been demonstrated time and again throughout the world as it became an option.

This is extremely incorrect. First of all, fast fashion isn't actually necessarily cheaper. The real difference is that fast fashion exists in an industry that has gutted the entire infrastructure for alternatives, so people actually don't have the alternative options anymore.

It's like saying that "people prefer to drive cars, which is evident if you look anywhere in the US". Sure, almost everyone drives cars, but that's because the previous rail infrastructure was literally ripped up and destroyed by the auto industry, so now there's no alternative.

If you look at the developing world, it's very clear how incorrect your statement is, because there, fast fashion is more expensive, and other forms of clothing are far cheaper, far better quality, and far more common.


I was never talking about fast fashion. There's a huge swath of clothing items between "dirt-cheap, disintegrating crap" and "high quality clothing".

Those articles of clothing I'm talking about are almost never worth mending, but they still can last for years.


> I was never talking about fast fashion. There's a huge swath of clothing items between "dirt-cheap, disintegrating crap" and "high quality clothing".

It's all fast fashion, just different points along the line.

Yes, broccoli and kale look different, and one is a fancier class signifier than the other, but at the end of the day they're still the same species.


> It's all fast fashion, just different points along the line.

So there's only high quality tailor-made clothes... and fast fashion?, of which there may be a spectrum that goes from something like "very fast fast fashion", all the way to "slow fast fashion"?

> Yes, broccoli and kale look different, and one is a fancier class signifier than the other, but at the end of the day they're still the same species.

So you're against... clothes? And possibly also agnostic when it comes to the existence of taste buds?

I'm sorry but your line of reasoning is making zero sense to me. This is getting to the point where I have to ask: Have you ever bought clothes yourself or does someone else buy them for you? Can you put into words of your own (without googling) what makes an article of clothing like a shirt not fast fashion?


Fast fashion is certainly cheaper for "fashionable" clothes. The market they are disrupting is a relatively small one. Cheaper clothing is available to everyone through stores like Target, Walmart, or other department stores. And the quality is perfectly fine.


> Fast fashion is certainly cheaper for "fashionable" clothes. The market they are disrupting is a relatively small one. Cheaper clothing is available to everyone through stores like Target, Walmart, or other department stores.

All of that is part of fast fashion - including Walmart and Target; it's just the lower end or very far down the pipeline. You can't separate one from the other, because they are economically codependent.

> And the quality is perfectly fine

Fast fashion is indubitably much lower quality, and it's weird to see people here denying that, when the fashion industry itself is completely in agreement about that fact. It's not a secret - they discuss it openly.


No one in the world ever bought a wardrobe at Walmart because they could afford to drop $20k at a tailor but chose not to because Walmart is good enough.

They made economic trade-offs depending on their level of poverty.


Clothing today is low quality? I literally have jeans and shirts I bought from Walmart ten years ago in my rotation today. I would have more if my wife didn't keep getting rid of them...


I think you're romanticizing the past. For 99% of human history, 99% of women had to make do with one ill-fitting "dress" from their teenage years to death, including pregnancy.


> Trading quality for price

> Means that many, many more people can have something of passable quality.

There is also a feedback loop where automation puts people out of work, and even though it helps make products cheaper, it also creates less of a market for the products. Your post makes it seem like there are only positives.


Society was awash in music for centuries before recording and radio.


If by "awash" you mean "significantly less present in regular people's life by orders of magnitude", then yes.

Not to mention the degree of control over what music is actually being played.


I was thinking more along the lines of the rich traditions of folk music. I'm peripherally involved in the fiddle music scene, so I have some contact with this. The sheer quantity and variety of folk tunes, songs, and styles, is huge, and is probably the tip of the iceberg. This is why I think that people were rich with music, even if they weren't necessarily immersed in it 24/7. But cost wasn't a barrier to the enjoyment of music.

Sure, people with more disposable income could enjoy more formal, and perhaps more professional, musical performances. The growing middle class created demand for music and other forms of entertainment.

Music was taught in the schools. Even small towns had a bandstand in the park. Music was used in public ceremonies, including church. Any tavern was likely to have a musician, paid or not. Sometimes the pay consisted of food and lodging.


It wasn't significantly less present. Singing yourself was just a given. During work, at celebrations etc. It gave full control as well.


I guess, IF I'm still alive in a few years, since I can barely afford eggs now. Do you have an AI replacement for food yet?


Yep. Recording and photography killed almost all the value (social and financial) of middling artistic talent in music, storytelling, and visual arts, which may well have been what gave a lot of people a significant part of their sense of self-worth before that—plus, maybe, some income.

Now AI's coming for most of those who survived that first culling. And not just the middling-talent folks this time.


That's an interesting set of transitions that I hadn't really comprehended. People used to be entertained by live performers in their local area.

Printing, photography, radio, film, television, etc have all increased the availability and 'quality' of entertainment available at the same time as reducing the number of creators involved. (Obviously there is some debate possible around quality)


That kind of assumes people consumed the same amount of entertainment throughout history, which I don't think is correct.

It also assumes that they were constrained to local creators.

It is possible that people simply did other activities that don't involve a creator. Another possibility is that most people consumed content from a small set of non-local creators, like authors with wide distribution.


It's Vonnegut's observation. He brings it up in a couple books or stories, IIRC, but like nearly all his themes or messages, it's included in Bluebeard.

Being a half-competent folk musician or good storyteller or being able to sketch pretty well used to be super valuable to your family and community. Not so much anymore. Expressions of those sorts of skills are more often tolerated than genuinely looked-forward-to, now. The need is gone.

People on this site complain about folks being consumers and not producers, not being creative—well, for a large swath of the arts, that's where it started. Recording and photography. Took it from something that was strongly socially encouraged & rewarded to something private. You can't fix that with "maker" movements—not in any major way.

Technology changed the social context and wiped out the external motivation & encouragement for a bunch of kinds of creative expression that were accessible to & achievable by the masses. AI is more of the same.


Still, many people are delighted if you offer them a drawing you made specially for them, or if someone picks up his guitar and begins to play.


On the other hand, this is a great opportunity for new artists to embrace AI and make a career out of it, instead of cowering in fear.

There's a popular youtuber named Joel Haver who films videos, then uses an AI tool to convert them into animations, so they can be magically put into a space or fantasy setting.

AI Dungeon is a text adventure where the content is AI-generated.

I also imagine tools similar to Github Co-pilot for other markets. Some AI could generate music or video games levels based on inputs from a user, then the user can take or modify the best bits. The goal hopefully being that they get something no human could have thought up, instead of just generating a bunch of mediocre content.


>There's a popular youtuber named Joel Haver who films videos, then uses an AI tool to convert them into animations, so they can be magically put into a space or fantasy setting.

Hang on, AI tool? Joel Haver converts every frame manually. His method is the literal opposite of using an AI tool. It's incredibly painstaking.

https://www.youtube.com/watch?v=tq_KOmXyVDo


Uh no. Did you watch the video you posted in full? He uses software called “ebsynth”. He draws over a couple of frames in a scene and feeds the rest to the software, which attempts to match the style. It’s not perfect, which is why you see some weird glitches in the videos.


Most people can't afford top talent to play live music at their wedding or photograph it.


I'm sort of sick of the top talent argument.

My grandfather and his group sang. From the tapes I have heard of them, they didn't sound worse than any other group of Appalachian gospel/bluegrass singers.

My parents' wedding photos weren't 'professional' and I think they came out pretty well, perhaps even charming.

Sometimes I think people are putting too much stock in the notion of a 'perfect' life, rather than a lived one.


Exactly, that's my point. I'd rather listen to a "good enough" local band performing live than some "World's best" recorded performance.


Most people wouldn't, and millions of CDs are sold by the most popular bands ("world's best" is subjective). When it comes to live music, most people would pay 10x more to sit 100x further away to see a world-class act vs. a cover band playing the same music.

This is going to push us back to making our own music like we used to. Singing/playing should be a fun group activity rather than a performance given to a group of non-performers. We are all artists.


I mean, the whole concept of "world's best" in terms of art doesn't even make sense. It rarely, if ever, makes sense in other areas or fields either. (Even in sports, where it is sometimes objective, records and achievements are broken again and again.)


Also have to consider the kicking away of the ladder. Nobody gets to excellent without passing through the lower stages. Hard to stay motivated if it'll take 5 years just to see if maybe you have something a computer can't offer.

Not impossible, but definitely a raising of the bar.


This. How are people going to learn and become better if all the base work is being done by an AI?

Reminds me of the Empire in Asimov's Foundation, where they knew just enough to keep the current tech running, but not enough to fix it if something major broke, or to create new tech.

When the AI breaks something, we will be missing the people who knew how all this s$%T works.


The Machine Stops eventually.


Reminds me of the guy who built his own OS. It took him 17 years, so the UI is terrible, but a few years ago he finally finished. Someone will still know how it all works, but that person isn't following a typical or even sane life, and the results will be brilliant messes.


For a somewhat more "normal" example, see https://serenityos.org/. Started by one person, though there are many other contributors now.


Yeah but Terry had manic-depression, schizophrenia, and possibly God, on his side.


Reminds me of many great musicians


> now all of those while still available are significantly more expensive and niche than the mechanized (production or reproduction) equivalents.

No, they cost exactly as much as they always have. Before machines, people didn't have wardrobes full of tailor-made clothes. Each person had, give or take one, exactly as many tailor-made clothes as we have today.


> Each person had, give or take one, exactly as many tailor-made clothes as we have today.

Many, perhaps most, people don't have any tailor made clothes because they can't afford a tailor. Which has indeed always been the case. But before machines, people who couldn't afford a tailor had no other option for clothes except homemade. Now, with machines, they do.


That's exactly what stavros is saying.


Supporting industries losing economies of scale drives up costs. If you have fewer tailors, local cloth suppliers become insolvent. This leads to the few tailors having to import cloth and increase prices.


That makes sense, thanks. I guess the high end becomes slightly more expensive, while the low end becomes massively cheaper.


Closely related to Baumol's cost disease (https://en.wikipedia.org/wiki/Baumol%27s_cost_disease). If one option becomes many multiples cheaper, anything that can't also get that cheap will almost inevitably rise in expense as a result.
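To make the mechanism concrete, here's a toy sketch (every number invented for illustration): if wages economy-wide track manufacturing productivity, hand-tailoring gets relatively more expensive each decade even though tailors haven't gotten any slower.

    # Toy model of Baumol's cost disease; all numbers are made up.
    # Two sectors share one labor market, so wages track the productive sector.
    wage = 1.0            # baseline hourly wage
    factory_rate = 1.0    # machine-made shirts per hour
    tailor_rate = 0.1     # hand-made shirts per hour (flat over time)

    for decade in range(1, 4):
        factory_rate *= 2                     # manufacturing productivity doubles
        wage *= 2                             # wages rise with the productive sector
        factory_price = wage / factory_rate   # stays constant at 1.0
        tailor_price = wage / tailor_rate     # doubles every decade
        print(decade, factory_price, tailor_price)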


I'm not sure tailored clothes have become more expensive at all. You can have a tailored suit for what, a couple of thousand? So 1-2 months of the average wage? That's peanuts, historically.


> all the music you listened to was played live by musicians, all stories were brought to life in front of you by theatre actors

Well, no, you probably didn't listen to much music or see many stories at all, because the average person couldn't afford to experience such things more than a few times in their life. Shocking that Luddites are still a thing in the 21st century, especially here.


I think this misses the point that _everyone_ used to sing and tell stories. What do you all think families did in the winter around a fire? Where do "folk tales" come from? It's as if you think joy in sharing stories and music is restricted to a monetary transaction.

Sure, some people were better storytellers than others, or more beautiful singers than others, but that gave reason for communities to exist, to share food and company; not for money, but because once you have shelter and food, what more than that do you really need? And also, without community, can you even have shelter and food?

The vast majority of humans that have lived, let's say since spoken language developed, have shared stories and music as a gift. It wasn't until very recently (relatively) that sharing these things became transactional.

Sorry that was a ramble, please take no offense.


> more than a few times in their life

Whereas now we have the complete opposite problem, we're inundated by (much) lesser variants of these things to the point of saturation, addiction, and emotional depletion. I'm not sure either one of the extremes is particularly desirable, nor how acknowledging issues that mass-market consumerist societies currently visibly suffer from makes one a Luddite.


I think you'd be surprised at the degree to which the "little people" did experience live entertainment before the advent of mass market recordings. Festivals, traveling troubadours, other traveling performers...


Yeah, I guess all those groundlings standing around eating oranges and laughing at the dirty jokes at the Globe Theater in Shakespeare's day just didn't exist in this guy's universe.


It always amuses me when people on Hacker News use Luddite in the pejorative "caveman" sense of the word instead of the actual "skeptical of the societal drawbacks of advancing technology" sense, especially since people on this site like you affect an air of being so much smarter than the average pleb.


Subtitles got noticeably worse now that their generation is often farmed out to AI, especially in translations.

Drives me nuts as someone who likes using subtitles.


And in return I can get subtitles on literally everything I want now, instead of hoping the creator added them. I have no doubt that some places that were paying for transcription decided to go the AI/ML route, but for every 1-2 of those there have to be thousands of examples of subtitles existing where they never would have before.

I always have YT's subtitles turned on and while they aren't perfect they are way better than the alternative (none).

Network/streaming TV and movies should absolutely be paying someone to, at a minimum, clean up the first pass by AI, and we should demand that, but I'm not at all ready to throw the baby out with the bathwater.


I have observed subtitles with mistakes that change the meaning to the exact opposite! On a video about the supply lines of components for a particular approach to fusion. Single word change which meant the CEO was saying his company wouldn't work. Are you sure you don't want to throw the navy out with the bathwater?


> the navy out with the bathwater?

I realise it is not good HN form to talk about the comment instead of adding to the conversation but the subtlety with which you injected this was brilliant.

I was ready to ignore the change as human error/typo until it hit me.

Point about AI driven subtitling very well made!


Huh, that hasn't been my experience, at least not yet. Are you talking about newer movies and shows?


Sometime last year, someone published a guide on auto-generating subtitles using an Emacs plugin and OpenAI's Whisper. A lot of anime release groups use it so they can get an English-subtitled version of a show released a couple of hours after it airs in Japan.

It's nice to have access sooner, especially for less popular shows that might never get any kind of release outside of Japan, but it's a lot worse than most fansubs and sometimes misses entire sections of dialog.
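For reference, the core of those auto-sub pipelines is tiny. A minimal sketch using OpenAI's open-source whisper package (the model size and file name here are placeholders I picked, not what any particular group uses):

    # Minimal auto-subtitling sketch with OpenAI's open-source Whisper.
    # pip install openai-whisper  (also needs ffmpeg on the PATH)
    import whisper

    model = whisper.load_model("medium")  # bigger models: better accuracy, slower
    # task="translate" turns non-English speech into English text
    result = model.transcribe("episode_01.mkv", task="translate")

    # Each segment has start/end timestamps: enough to emit an .srt file.
    for i, seg in enumerate(result["segments"], start=1):
        print(f"{i}: [{seg['start']:.2f} -> {seg['end']:.2f}] {seg['text'].strip()}")

It's also easy to see where the quality ceiling comes from: nothing in that loop knows who is speaking or what happened three episodes ago.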


That scene has long had the "janky but really fast translation" niche and the "good but takes a couple of days" niche. From what little I experienced of the former, people who cared about translation quality probably weren't using it anyway.

I've been away from all that for rather a long time, so I don't know. Still, I'd guess these robo-subtitles are probably replacing the "fast and bad" niche. The "slow and good" niche seems like it could survive.


Looks like the anime fansubbing scene has not changed much in 30 years.

Back then there were multiple fansubber camps: notably Arctic Animation, which fell into "janky but really fast translation" and released many shows, and purist fansubbers who released maybe a few titles a year but with extremely high-quality subs.

Then you had extremists who demanded no subs whatsoever, and then those who preferred commercial dubs...


Lots of Chinese dramas with barely-parseable subtitles as well. Missing sections, and completely wrong contextual glosses, which are particularly easy to get horribly wrong in Chinese.


I've been getting the advice (but I'm learning German, not Japanese) that the best thing is to watch shows with subtitles in the foreign language to gradually acquire it.


Mostly videos I stream online. YouTube is a good example, but I've noticed an uptick in bad subtitles on other streaming platforms that stream content with higher production values.


I remember a while ago, American shows and movies joked about Indian customer support "lowering the standard of customer service." Now the same is said of AI "lowering the standard of subtitles." Quality is not appreciated, which is why offerings keep lowering the bar on quality. Otherwise they'll be out-competed.


The sad thing is, AI could be used in a clever way: have AI do the large amount of legwork in transcription, and have a human refine that.


Maybe. But at least today my experience is that AI podcast transcription with good audio is off enough that I only use it to check quotes or pick out short excerpts. If I'm going to publish a full transcription, I'll just have a human do it. Not clear to me if it's faster for someone who does that sort of thing to be making a lot of corrections as opposed to just transcribing--though it will get there.


English dubs and English subs are completely different, the subs (presumably) being translations of the original language rather than transcriptions of the dubs. It's very jarring, and I can only imagine this getting better with AI.


I've been rewatching some Star Trek: Voyager on Paramount+ and there is no way a person did all the subtitles. It's like 98% accurate, I'll give it that. But there are enough glaring mistakes that there's just no way a person would have made those kinds of mistakes, and they are easily catchable if a single person were to proofread.


Subtitle mistakes are very common. I have tons of DVDs from before AI, and the subtitles have tons of obvious errors. I doubt they re-subtitled them, though they may have run them through a generic AI? They probably took whatever was there and just dumped it onto the service.

Some discs are even more fun in that they have 3 generations of subtitles all on one disc: the TV closed captions that fit into the vertical blanking interval, the DVD subtitles, and the subtitles for the hearing impaired. I usually go for the hearing-impaired ones, as they seem to be the most accurate, and they usually have a nice bonus of putting the text over/near the person who is actually speaking, which is nice for when characters on the show talk over each other.

A fun touch: on the Toy Story ones, they used the font from the title cards as the font for the subtitles.


Paramount+ sucks. If you try to play DS9 season 5, episode 1, it simply throws an error. So every legal option for watching that episode is gone.

(I’m almost ashamed I watch Star Trek so often that I know this offhand…)

The subtitles are literally the worst, short of being incomprehensible. Like so bad that until someone experiences them, you can’t really grok how terrible they are. They’re a solid D minus.

Maybe it's an iOS thing. But every so often, the positioning codes sneak into the subtitles themselves, so you start seeing x,y coordinates. And any time anything is italicized, the italicized part jumps to the opposite side of the screen, meaning the caption gets split in half. It'd be comical if it wasn't frustrating.

(Thanks for the opportunity to vent about how sad it is that Star Trek is trapped behind such a bad streaming service. And I even pay for no ads, yet they still show ads every 8th episode or so.)


I agree Paramount+ is my least favorite streaming service. It never seems to remember what I was watching or where I am in an episode.

Subtitles are not something I normally have on, since I know almost every word verbatim for TNG and Voyager... so maybe they are even worse than I realize!

I only pay for this service because I want to support my beloved Star Trek... glad it's been getting some new life in recent years at least!

But yes.. it's sad that they have such a crappy service.


The subtitles I've seen come out of Whisper have been astounding, dealing with heavy accents and stammering without trouble. Current YouTube auto-CC is pretty bad, but the current-gen AI is really impressive.


I experienced this with older shows on Amazon


Subtitles have never been better; idk what you're smoking.


I've worked tangentially to some of the orgs trying to do these types of things. Having a person review everything is really difficult, especially when some titles are only up for 3 to 6 months and they only get a few days' notice.

Public numbers for Prime are around 60,000 titles in 2021. Those are most likely going to be in four languages, and there are going to be at least two versions of each of those depending on what regions they're playing in. That also assumes a title is only one piece of content, not a TV show. If we assume that 50% of the titles are TV shows, each with a minimum of 10 episodes, and that every title is around an hour (including movies, so that averages out shorter shows, longer shows, and longer movies), that ends up being around 4.8 million hours' worth of content.

Let's just assume that the rate of subtitle entry is one-to-one with the length of content, though it's much more likely 1.5-to-1 or 2-to-1, given that people have to pause, go back, and fix things. With the average worker working 2,000 hours a year, that gives us 2,400 person-years to data-enter the entire catalog. Manual entry also obviously leaves room for poor workers or fat-fingering, so if you wanted really high quality you would spot-check workers against each other, and so you might bloat that up to 3,000 person-years.

So if you hired a brand-new team of 3,000 unskilled workers, trained them for 6 months, and then spent them for a year, you would be able to backfill all of Prime's current catalog.

But what happens when you onboard, say, 10,000 titles from a new licensing deal with Searchlight?

You want that content up as fast as possible, and as some other comments have said, people like content less if it has no subtitles than if the subtitles are bad.

Also, just running the numbers: let's say you pay someone around $30,000 a year to do this data entry, a very low wage, and double that for facilities, support, HR, and all that crap. 3,000 employees at that loaded cost for a year is $180 million. The training alone is $90M.

Should each of the streaming services take a large chunk of their budget just to make sure that a human reviews the subtitles, possibly at an accuracy rate only slightly higher than what the machine learning can do currently? Would you rather have 5% human coverage, or 100% coverage at 90% AI accuracy?
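If you want to check my arithmetic, it's roughly this (I'm treating each title as about 10 hours of footage overall, once movies are averaged against multi-episode shows):

    # Back-of-envelope for subtitling Prime's catalog, per the estimates above.
    titles = 60_000           # public catalog figure for 2021
    hours_per_title = 10      # rough blend of 1-hour movies and ~10-episode shows
    languages = 4
    regional_versions = 2

    content_hours = titles * hours_per_title * languages * regional_versions
    print(content_hours)      # 4,800,000 hours of subtitle tracks

    person_years = content_hours / 2_000   # 2,000 working hours per year
    print(person_years)                    # 2,400 person-years at a 1:1 entry rate

    person_years_with_qa = 3_000           # padding for spot checks and rework
    loaded_cost = 2 * 30_000               # $30k wage, doubled for overhead
    print(person_years_with_qa * loaded_cost)  # $180,000,000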

From my understanding, most content doesn't actually have subtitles unless it aired on a premier TV network that was required by government regulations to subtitle its shows, such as a BBC TV show. That means the streaming networks are doing this out of customer interest rather than being required to, so they're actually backfilling work for the people who produced the videos in the first place. And from the little bit I've worked in broadcast, subtitle sharing isn't at all standard: Netflix, for instance, may have added subtitles, but that doesn't mean Prime Video or Hulu will get those subtitles if they take on that content later. The video producers aren't that interested in pulling the information back into their catalog; they don't have the tech support to do stuff like that.

Also, almost all of this was dictated via Google voice (AKA subtitles via AI), and the only mistake I noticed was that it didn't understand "tangentially"; it put "10 generally" instead.


You definitely see this in the translation industry. Neural net machine translation misses enough nuance that it can't replace a human translator where it really matters, but it 'looks right' enough to convince clients that it can.

As a result, it's a lot harder to make a living as a freelance translator these days: there are fewer jobs, and what jobs exist are often proofreading machine translations, which command lower rates because 'the machine did most of the work already,' even though they often require full rewrites.

At the same time, human translation quality has gone down too, since a lot of people will pass off machine translation as their own work, and when rates are too low, you can't spend too much time on any particular job.


If everything sucks a bit more in the future, will it then recursively suck even more over time, given that AI feeds on its own content?


I have the same worry. Another example: is furniture nowadays more beautiful and durable than it was 100 years ago? Is there more variety now? Not in my book. Same goes for a lot of products that are now produced in the cheapest way possible at scale.

Looks very much like a race to the bottom dominated by a few big players to me. Consumers don't seem to care that much.


There's absolutely beautiful and durable furniture available, it's just a lot more expensive than Ikea. There's a local place that makes furniture on-demand; as far as I can tell, the quality is great, but we're talking $8–10k for a dining table. That segment of the market still exists and I don't see it going anywhere.

Cheap, mass-produced furniture has taken the bottom of the market but there really is a wide variety available today.


See Japan: it will die out at some stage; these are hard jobs with low margins and returns. The quality of raw materials is going down too.


Your example is true only if you include the criterion that it be affordable. You can probably get any furniture you want made locally, or by a highly skilled person far away who is willing to ship to you.

100 years ago, how much did the family dinner table cost? Would you measure it in hours or days of salary? When that family in 1923 went table shopping, they were either going to buy from a company like Sears Roebuck or from a local store and there probably wasn't much variety from either source.

Today, there's lots of cheap stuff available, but there's also lots of high end custom stuff available as well. We have more of everything.


Exactly. What we’ve done is not lower the quality bar on “excellent”, we’ve actually raised the quality bar on “cheap”.


We have way less in the middle. A lot of super high end luxury stuff, a lot of very cheap affordable stuff.

There's a gap in the middle, and it's very evident just by looking at family pictures from the sixties and eighties.


Does it matter? There is a lot of 100-year-old furniture out there that nobody wants. It doesn't fit modern lifestyles.

100-year-old houses are only livable because someone put a ton of money into retrofitting things like plumbing, electric, and HVAC. Most of that has had major rework done several times since it was first added. Even then, the fundamentals of those houses mean that they cannot be retrofitted for good insulation, and it can be argued they should all be scrapped for that reason.


I would say more tastes than lifestyles. You can use old wardrobes, drawers etc. just fine. It's just that people are used to looking at vaguely modern (often bland and disposable) stuff from the media, and at least think other people would think them weird and nonconforming for using old furniture. Even if you can get it relatively cheap, which is suggested by the "no one wants it" phrase.

I think there is an ethos of buying new and overpriced (and also buying "experiences") to show off that you have money for that and aren't some poor nerd trying to optimize their budget at the expense of consumerism. Often you don't actually have that much money for actual conspicuous consumption, so at least you're trying to imitate the Apple aesthetic. I am getting this vibe from talking to a distinct set of people of my age group, of course not all.


A different take on lifestyles: unfortunately there's a big gap in convenience between being able to move across the world and get everything you need in one afternoon in IKEA, and tracking it all down second hand.


The poor quality furniture of 100 years ago is no longer around.


Households had way less furniture in general, then, too, and yeah, some of it was makeshift or rough-built and probably didn't make it to today.

Office furniture seems to have been incredibly well-built, though, by modern standards. Kinda like how office and government buildings used to be built a lot better. Institutions seem to have cared a lot more about that kind of thing back then. They built desks for ordinary low-paid clerks or typists like they expected the office to still be running, and still using that same desk, in 200 years.

[EDIT] Also, part of what makes older furniture seem so nice is that excellent-quality wood was abundant. It seems so nice because they used wood that'd be ultra-expensive luxury-grade today, for, like, interior structural parts you never even see. Same story for houses. I've seen wooden support beams that'd be almost impossible to find at any price, today, the quality's so high and the piece is so large. Floor underlayment (again, not even intended to be seen) so knot-free it'd probably be used for veneer, now. That kind of thing.


It is called survivorship bias. For example, I can look back at 80s NES games. Some of those games are very good, but I would say 80% of the total catalog was either very bland or just outright bad. Most consumer-type goods have this issue. In 30 years we will look back at this time and say 'they don't make XYZ like they used to,' then point at the cream of the crop of whatever genre XYZ is in for this year. Just remember: the same year The Shawshank Redemption came out, so did Police Academy: Mission to Moscow.

edit removed: Leonard Part 6.


You make a good point, but your dates on Leonard ('87) and Shawshank ('94) are a little off.


Consumers don't have much of a choice. Wages have stagnated for decades, so their incomes' purchasing power has decreased over time. It's often a choice between buying what's affordable or not buying it at all.


Modern furniture is much lighter and easier to disassemble, thanks to screws, nuts and bolts being much cheaper than 100 years ago.


This is a really bad take.

1. Creativity can be achieved at faster speeds than humans can consume.

2. We have no evidence to say AI and the techniques behind it won't keep improving. (There is already so much low-hanging fruit; a lot of it is infra problems, and the other half that we can't even imagine can probably be solved by AI better than humans can.)


> allows massive cost-savings on hiring people to do customer service

Does it, really? I find that most of the time, while people can spot the difference between facts and a sales pitch, repeating the pitch enough still changes the common discourse.

People can say things like "technology X may be more expensive but saves development time" (where X can be anything from the latest frontend framework to microservices to voice recognition) while in fact there is no data that it saves any development time at all.

Is it even true that voice recognition reduces the amount of customer service compared to a simple menu system? If it isn't, the premise doesn't hold.


True, and this has real effects on jobs.

There was a similar dynamic with the translation market and Google Translate. It didn't matter that human translation was superior. What mattered was that Google Translate (or similar) brought down prices so much that it effectively destroyed the market for everyday types of translations. Why? Because customers said, well, just use Google Translate and then improve it a bit.


> To take an analogy: bad voice recognition software abounds everywhere, not because it is better than what it replaced in terms of UX, but because it works just enough and allows massive cost-savings on hiring people to do customer service jobs.

I had to jump through hoops talking to the DoorDash bot to get a change to my order. And I pray for the poor soul who runs afoul of Google…


Also, how do you continue to train a model like this when everywhere you look is just output from the same model? Like, at some point, the real conversations and text gets overwhelmed by the fake stuff. Where does the model pull more training data from?


Maybe let's wait until people get good at using it?

It's been like 1.5 months since it was released.

For me, I am doing a lot of my writing with it because my writing is worse than mediocre.

As long as you don't expect it to know things off the bat, it has a decent memory, though it seems to be unaware of what it knows (e.g. https://news.ycombinator.com/item?id=34370057).

So you can just feed it the good info and get it to update things with more context, and it will spit out pretty decent prose (though you have to ask for it to be well written).


ChatGPT is basically automated reversion to a slightly worse mean for applicable areas. The algos, as sophisticated as they are, produce a credible remix of the training input.


But just give it context about what you want it to write and it will write about it. Obviously this is much easier if you have an existing body of work but you can still write to ChatGPT clumsily and it will often turn what you are trying to say into better prose.

My analogy is that it is to writing what the spell checker is to spelling. Very few people in the world have to be good at spelling anymore. Yeah, everyone is okay at it, but it's not as valued a skill as it used to be. ChatGPT is doing the same for writing: yes, you need to be able to write to an okay level, but you don't need to be able to write well. ChatGPT does that.


The question is how often it will be elevating bad prose vs. removing opportunities to write great prose.


There's also the fundamental problem of AI-generated content "polluting" the training data, right?


This is an old concern (and nevertheless a good one): https://en.wikipedia.org/wiki/The_Question_Concerning_Techno...


Haha, it's true, but I feel like GP's thought stops way short of Heidegger's. Products we buy being a little worse because of the necessities of capitalism (or whatever you'd rather say) is small change compared to the very essence of Being getting enframed into total occlusion.

That is, I don't know if I would say getting frustrated by an automated voice system is the same thing as a hammer breaking for your given Dasein, but one could make the argument I bet.


Turn it on its head, then. We can build bots which navigate bad customer service portals on your behalf.
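A crude version is already buildable. A sketch using Twilio's calls API (the numbers and digit sequence below are placeholders; each "w" is a half-second pause while the tree plays its menus):

    # Sketch: dial a support line and press through a known phone tree for you.
    # pip install twilio  -- credentials and numbers below are placeholders.
    from twilio.rest import Client

    client = Client("ACxxxxxxxxxxxxxxxx", "your_auth_token")

    call = client.calls.create(
        to="+15555550100",           # the company's support line (made up)
        from_="+15555550101",        # your Twilio number (made up)
        send_digits="wwww1ww3ww0",   # wait, press 1, wait, press 3, then 0 for a human
        url="https://example.com/bridge-to-me.xml",  # TwiML that connects the call to you
    )
    print(call.sid)

The hard part, of course, is the portals that change their menus or hang up on you, which is where the arms race would start.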


Most marketing copy is now written by mediocre humans who are much worse than mediocre AI.


[dead]


I like mchurch, and I think he'll be safe from AI, since the context window needs to increase exponentially in size in order to accommodate an mchurch-style doorstopper.


It's only true literature if it's spawned by neuron networks bred inside the skulls of free-ranging apes. When it comes from simulated neuron networks built by software in datacenters, it's just sparkling copy.


> more about just making everything noticeably a bit worse.

Hit the nail on the head. ChatGPT will replace the wrong things, and probably create a world where we still need to solve problems ChatGPT has become attached to.

It's all noise, I can't perceive a signal at this point.


It's just good at writing; its contextual understanding is bad (at first).

But give it context and it flies. Ask it to write a cover letter for an engineering role and it will turn out mediocre crap.

Give it the actual role and your CV in the prompts, you're pretty much good to go. It's unique, you still control the narrative (please highlight my ability to work under pressure) and it's done in 10 secs.

I really don't see what's not to love about that.

Also, it usually writes better if you tell it to write well.
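The workflow really is just stuffing the context into the prompt. A minimal sketch with the openai Python package (the model name and file names are placeholders, not a recommendation):

    # Sketch: a cover letter grounded in the actual job ad and CV.
    # pip install openai  -- assumes OPENAI_API_KEY is set in the environment.
    from openai import OpenAI

    client = OpenAI()
    job_ad = open("job_ad.txt").read()   # the actual role description
    cv = open("cv.txt").read()           # your actual CV

    response = client.chat.completions.create(
        model="gpt-4o-mini",             # placeholder; any chat model works
        messages=[
            {"role": "system", "content": "You are a careful writer. Write well."},
            {"role": "user", "content":
                f"Write a one-page cover letter for this role:\n{job_ad}\n\n"
                f"Based on this CV:\n{cv}\n\n"
                "Highlight my ability to work under pressure."},
        ],
    )
    print(response.choices[0].message.content)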


I agree with the article but maybe you are trying to do the wrong thing with ChatGPT. I am not a native English speaker (as many of you already noticed due to my grammar mistakes), so ChatGPT have been very useful to rewrite my texts, fix grammar and maybe "beautify" them a bit. It's has been an invaluable tool in that sense. I feel much more confident now sending emails and writing important messages knowing that the grammar and tone have been "approved" by this IA.

By the way, I asked ChatGPT to fix/improve this previous comment and this is the result:

I agree with the article, but perhaps you are using ChatGPT in the wrong way. As a non-native English speaker (as many of you have likely noticed due to my grammar mistakes), ChatGPT has been extremely helpful in rewriting my texts, correcting grammar, and making them more polished. It has been an invaluable tool in that sense. I now feel more confident when sending emails and writing important messages, knowing that the grammar and tone have been reviewed and approved by this AI.


As a native English speaker, I'd be quite wary of doing that -- I thought your original message was perfectly clear (a few errors like "It's has been", but nothing that impeded my ability to understand -- in fact, I had to closely re-read your 1st message to see the difference, because my brain skipped over things like "it's has been").

The "ChatGPT improved" result is certainly a little more polished, but given ChatGPT's ability to confidently misinterpret / hallucinate, I would be worried that it could subtley change meanings, especially for more technical content.

Of course this problem isn't limited to AIs; I've known (native speaker) project managers and client liaisons who have tweaked text I had written to ask technical questions and, in doing so, totally changed the meaning (and then done the same in the opposite direction, so the response was especially bewildering!)

I can understand that it's improving your confidence (and by the look of your 1st paragraph your English is good enough that you could easily identify if it changed the meaning in your message -- and that combined with a confidence boost is maybe worth the extra proof-reading you need to do) but that wariness of risk is just my $0.02.


I helped a guy who wasn't good at English send emails in a previous job. It was conflicting, because with automation we're removing trials from our lives that convey how much mastery we have in given areas.

One of the heaviest consequences of AI proliferation is that the value of understanding will continue to plummet, while the value of asking for help and delegating will rise.

This is all fine as long as there are plenty of willing subordinates, man or machine, to do the dirty work of actually knowing things for you, but what happens when they wise up to the fact that they're getting the short end of the stick?


I understand that you would prefer to sound like a native speaker. This is a perspective I've heard from others over the last few weeks too, so I've been considering it. Your writing is fine; the original version is perfectly understandable on its own.

The thing is... the only time that grammatical mistakes actually matter is when they make meaning ambiguous. And since ChatGPT doesn't know what you meant, I would be more worried that it would further distort any mistakes in that area.


I personally like your original paragraph because, while it may not be perfect, it has flavor. If everyone started using AI to fix their paragraphs, text would become boring because it would all be of a similar style. It would be like reading one long continuous book written by the same author.

Imagine if restaurants all started using AI to cook their food. It would destroy the taste of food because no matter where you go the food will always taste the same. The great thing about restaurants is you could go to two different pizza places on the same street and order the same type of pizza and they would both taste different. Now imagine if AI got hold of the recipe and made the pizza at both places. They would turn out the same and taste the same.

How is that better?


AI has lots of flaws, but that's not inherently one of them, and in the context of ChatGPT it explicitly is not one of them. It samples from a probability distribution, and with appropriate prompts it's actually fantastic at giving you a new recipe every night, or a dozen variations on a recipe theme, and you can tailor it to exactly the level of novelty you're expecting. Randomness by itself is not the human characteristic that the current batch of AI is lacking.


It fixed some things (“ChatGPT have”, “It’s has”) but changed the meaning of the first sentence and introduced a misplaced modifier (“As a non-native English speaker, ChatGPT…”).


Can you please explain how that is considered a misplaced modifier?


Sure. To be more precise, it's a "dangling modifier" [1]. The modifying clause "As a non-native English speaker" intends to refer to the narrator, who is using ChatGPT to improve their text. But the sentence as constructed, where this clause is followed by the subject "ChatGPT", might be read as implying that ChatGPT is a non-native English speaker. The original beginning "I am not a native English speaker..." was clearer.

[1] https://en.wikipedia.org/wiki/Dangling_modifier


The modified version is saying that ChatGPT is the non-native speaker, not the original poster. It is a different meaning.

It is probably wrong too. ChatGPT is a native English speaker as far as I can tell.


I'm not that impressed with the ChatGPT output. I see one or two routine grammar errors ChatGPT fixed for you, but overall its version reads worse than your original. (A regular grammar checker could catch these mistakes just as well.)

For example, it took your perfectly good:

>I am not a native English speaker (as many of you already noticed due to my grammar mistakes), ...

And replaced it with

> As a non-native English speaker (as many of you have likely noticed due to my grammar mistakes), ...

While a lot of native speakers would write something like this, it's awkward and incorrect. You didn't mean to say "As you may have noticed, as a non native speaker, blah blah", you meant to say "As you may have noticed, I am not a native speaker."


Apropos automated approval of "tone". The company I work for now has a policy that documents must not use any "potentially biased" terms (e.g. "whitelist" or "master"). We now have automated agents that do simple search-and-replace (e.g. s/whitelist/allowlist/) over our internal Wiki pages. With no intention of debating the value or politics of such a policy, I wonder whether anyone is contemplating employing ChatGPT for more pervasive automated tone-control.
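
For what it's worth, the mechanics of those agents are trivial. A minimal sketch in Python; the term list is illustrative, and fetch_page/save_page are hypothetical wrappers around whatever wiki API is in use:

    import re

    # Hypothetical mapping of flagged terms to approved replacements.
    REPLACEMENTS = {
        r"\bwhitelist\b": "allowlist",
        r"\bblacklist\b": "denylist",
    }

    def scrub(text):
        """Apply each replacement, preserving a leading capital letter."""
        for pattern, replacement in REPLACEMENTS.items():
            def swap(match, replacement=replacement):
                word = match.group(0)
                return replacement.capitalize() if word[0].isupper() else replacement
            text = re.sub(pattern, swap, text, flags=re.IGNORECASE)
        return text

    # fetch_page / save_page would wrap whatever wiki API is in use:
    # save_page(page_id, scrub(fetch_page(page_id)))

Whether ChatGPT-grade tone control is worth wiring in where a regex suffices is, I suppose, exactly the question.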


I also wonder how long until this is mandatory in pretty much every organization, at least in the US.


Sounds like something "the Party" would want to have in 1984 to further cement the use of "newspeak".


Notably, ChatGPT changed the meaning of what you said.

“Maybe you are trying to do the wrong thing with ChatGPT” vs “Perhaps you are using ChatGPT in the wrong way” are subtly but substantially different statements.

One implies the incorrect usage of the software, the other implies the software being used for the incorrect things.


I actually prefer the original paragraph. It reads like you wrote it, you being you the person. The second feels entirely sterile and generic.

I am not worried about ChatGPT hallucinating, as I'm sure you read the responses, and I know it's easier for me to read and listen in a foreign language than to compose.

By the way, I think you're doing an amazing thing here, taking the technology and making a profoundly useful tool out of it. By reading its revision you can get corrections and pointers. I use ChatGPT similarly, but in other domains. It's not always "right," but it's close enough to point me in directions I wasn't aware of before.


Your writing skills will stop improving the more you depend on this. It's similar to people losing the ability to mentally map where they are, or to learn their own city, by relying on map and routing apps. While map apps have made navigation so much easier and getting lost much less frequent, it does come at some cost, perhaps an acceptable one. Is it acceptable to you that your English growth stalls, or, if you internalize the ChatGPT changes, that you start to talk like a generic bot, albeit a seemingly intelligent one?


How long until all conversations will start looking like they're authored by the same person?


It’s already getting that way with things like Grammarly. It makes me sad. I know my colleagues who speak English as an additional language use it and find it helpful, but I much prefer that the English they write be the same as the English they speak. It makes engagement more interesting.


Grammarly is very limited in what it allows you stylistically. I tried it for a few months but it didn't let me express all of my thoughts. However muddled and confused, those are my muddled thoughts, damnit.


I never cared for Grammarly and the like. I felt like they wanted to homogenize your voice and often dumb down your language. Google Docs' blue underlines, on the other hand, usually do point to an actual error of some sort.


Curious question. This makes me think ChatGPT is to English what gofmt/Prettier is to Go/JavaScript.


Sounds like a good anonymity tool.


Wow, that is a significant improvement. I’m a native English speaker and I may start using ChatGPT for this purpose too.


I seem to remember a book where people started using AIs like this to automatically improve their emails/texts, eventually allowing them to do more (basic responses/appointment scheduling/etc.), making communication a lot easier and more efficient.

At some point, the boundary between simply "polishing things up" and actively guiding the interactions became blurry, and the AI eventually became the go-between for everyone and started taking over.

We know this response has not been "enhanced" by the AI if my second and third sentences have not been filtered out.


"Significant" improvement? Really? They're almost the same but ChatGPT version removed some of the nuance such as `"approved"` vs `reviewed and approved`. I prefer the human-written version even with the grammar errors.


I feel like some of the nuance got lost in at least two places:

"trying to do the wrong thing with ChatGPT" vs "using ChatGPT in the wrong way" "have been "approved" by this AI" vs "have been reviewed and approved by this AI."

I prefer the original phrasing.


Most of it is an improvement, but the change in the first sentence could really change the meaning in a way that may or may not be appropriate. That change is ambiguous but could be important.

Proofreading shouldn't change the meaning or intention of a passage.


Maybe that was the intent (of the AI) :-)


Okay, I guess I'm in the other camp. I really did not like the ChatGPT version, and had no problem with the original.

The original was perfectly clear, and had your own unique "voice" to it. The ChatGPT version sounded like every other piece of ChatGPT, and honestly, I've become somewhat allergic to them now.


The second version is only marginally better and ignores your own style. How's that invaluable?


I prefer your original text, and I'd caution you against using AI this way. Embrace your writing style, make an effort to improve it but accept that it is a part of you.


A few things to consider here:

- By outsourcing your thinking to ChatGPT, you are not investing in improving your own skills.

- ChatGPT helps paint a picture of you that is not true to who you are

- This may lead to situations that may make you very uncomfortable, for example speaking in person with people who built a ChatGPT-based image of you

It is probably a good idea to not rely on ChatGPT for your text any more than you are relying on Photoshop for your pictures.


As a native (American) English speaker, the original paragraph looks fine to me except for “IA”.


The ChatGPT tone is weird, be careful. AI mistakes are different from non-native mistakes, and we are all going to be getting a lot more sensitive to being able to detect robots talking to us.


As a non-native English speaker, I'm perfectly fine with my maybe-filled-with-some-errors-but-understandable-and-personal English


In particular, the AI removed the scare quotes around approved. Assuming you intended them, that was a significant loss in nuance.


I think the answers about ChatGPT subtly changing the meaning or the tone are missing the point and ignoring the agency the author has. We usually are better at curating than writing, so I feel pretty confident the author went over the ChatGPT version and (as is their prerogative, and theirs alone) decided that the ChatGPT version actually reflected their intent and their style better.

I've been using ChatGPT (as a non-native speaker) as well and found it tremendously useful—not only to catch grammar mistakes or provide some better vocabulary, but to understand what might have been off and provide more alternatives. I often ask it to rephrase my sentences in different styles and then pick and choose what I like the best ("in the style of the new yorker", "in the style of ayn rand", "in the style of ezra klein", "in the style of CNN").

Anybody who cares enough to have ChatGPT edit their sentences, and cares about what they intend to express, will I think benefit tremendously from such a tool.

In fact, it is what ChatGPT is really good at: a search engine for vibes, tailored to what you feed it. It is, I think, a much richer, much more enticing tool than following some rules out of Strunk & White or some other bland business writing handbook.
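
The pick-and-choose workflow is only a few lines to automate, for what it's worth. A minimal sketch, assuming the openai Python package's Completion interface; the model choice and prompt wording are illustrative, not a recommendation:

    import openai  # assumes OPENAI_API_KEY is set in the environment

    STYLES = ["the New Yorker", "Ayn Rand", "Ezra Klein", "CNN"]

    def rephrase(text, style):
        """Ask the model for one stylistic variant of the given text."""
        response = openai.Completion.create(
            model="text-davinci-003",  # illustrative choice of model
            prompt=(f"Rewrite the following in the style of {style}, "
                    f"preserving the meaning exactly:\n\n{text}"),
            max_tokens=256,
        )
        return response.choices[0].text.strip()

    draft = "It has been an invaluable tool in that sense."
    for style in STYLES:
        print(f"--- {style} ---")
        print(rephrase(draft, style))

You then curate the variants rather than accept any one of them wholesale, which keeps the author in charge of intent.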


My thoughts on both paragraphs:

Oh look, another apologetic non-native English speaker using perfect English


Wouldn’t something like Grammarly be a better fit for that usecase?


It’s gonna kill the internet as a place to find useful information. Google is already almost unusable for researching any topic that’s even adjacent to anything that can be monetized. Widespread, good-enough, AI-generated, SEO’d “articles” will push it over the edge.


TBH, I'm wondering if we need to start using yahoo style curated content directories again. Content aggregators like reddit, hacker news and lobsters already help filter out a lot of noise, but they focus on recent articles. It would be nice to have a searchable/browsable source for interesting content. It wouldn't contain the full internet, but at least it would have better relevancy.


"Awesome lists" at Github are example of that


The return of the web ring


That's a good idea for a side project, thx!


Cheers! When I was thinking about this originally I thought you could build it as a social network. Rather than curate the internet for yourself, you could work with others to aggregate different curated views. By following only some people and having the ability to unfollow them, you would be able to keep away from spammers. Furthermore, you might be able to fork other people's curations so that people who "sell out" and include spammers can be cut out of the loop. Wasn't sure if this was too crazy though.
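
For concreteness, the core follow/fork mechanics are a tiny data model. A toy sketch (all names and structure here are illustrative):

    from dataclasses import dataclass, field

    @dataclass(eq=False)  # identity-based hashing keeps instances usable in sets
    class Curator:
        """One participant in a hypothetical curated-directory network."""
        name: str
        links: set = field(default_factory=set)     # URLs this person vouches for
        follows: set = field(default_factory=set)   # curators whose taste you trust

        def fork(self, other):
            """Copy someone's curation so you can prune it if they sell out."""
            self.links |= other.links

        def feed(self):
            """Your links plus those of everyone you currently follow."""
            result = set(self.links)
            for curator in self.follows:
                result |= curator.links
            return result

    alice, bob = Curator("alice"), Curator("bob")
    bob.links.add("https://example.com/great-essay")
    alice.follows.add(bob)   # follow; follows.discard(bob) cuts out a spammer
    alice.fork(bob)          # keep bob's picks even after unfollowing him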


For anything that isn't current events, that's basically Wikipedia for me.


I wonder if Wikipedia is also going to be destroyed by ChatGPT-like technologies. It is not hard to imagine a bot that writes its own "primary sources" (even comes up with good domain names to host them on), edits entries linking them, comments defending them etc until the torrent of bullshit is so thick that honest human editors are overwhelmed by the machines serving dull corporate and government propaganda interests. Like, imagine if there are 500 bot edits for every human hour spent curating/reviewing/adjudicating. Then scale it to 5,000,000 edits per hour.


I use Google restricted to these sites, or, in HN's case, just the HN Algolia.


At the rate people are using ChatGPT, there won't really be a search engine left to do SEO for if ChatGPT and its competition keep getting better.

If chatbots become popular, the ad space would go from search engines to product placement in chatbots' replies.


All these advances and yet we're still going to ruin them all with ads. What a bright future


That's the internet since ~2010: the brightest minds of the world are all busy finding ways to show you ads and/or collect data about you to show you "better" ads.


ChatGPT prompt: Devise an economic policy that would lead to the elimination of the advertising industry. Include a list of wealthy persons and companies who would benefit from it so that I may solicit bribes... err, "political donations" from them to support my 2024 presidential campaign.


ChatGPT isn't a search engine; ask it questions about a topic you know about and you'll quickly figure out it's very good at shaping BS into something believable.


I don't think so. The internet is already full of shit sites, you can automate spam very easily in any quantity.

But chatGPT would actually write useful texts, most of them more useful than the average blog post. Over a few years they might be just as good as the best human articles.

Normal people could write a draft version and have chatGPT reword it, that would control the contents of the article. But I expect spammers would just clone other articles, they don't have the time.

One thing AI could do is to scale up validation. Search every topic and note the answers: do they concur? Do they disagree? What is the distribution? Maybe there is no answer?

This would be a reference for AI to stop hallucinating and making factoid mistakes. The model should know when it doesn't know, or when it is stepping on a landmine by forgetting to mention something.

In the end I think the internet is going to be full of spam, AI generated or not. And we will need to use more AI to extract the signal from the noise. This time it should be local AI under user control.
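
That validation step can be prototyped crudely. A toy sketch of the "do the answers concur" check; a real version would need semantic matching rather than string equality, and the 0.6 threshold is arbitrary:

    from collections import Counter

    def consensus(answers):
        """Return the most common answer and its share of all answers,
        or (None, share) if the sources disagree too much to call."""
        if not answers:
            return None, 0.0
        normalized = [a.strip().lower() for a in answers]
        top, count = Counter(normalized).most_common(1)[0]
        share = count / len(normalized)
        return (top, share) if share >= 0.6 else (None, share)

    # e.g. answers scraped for "boiling point of water at sea level"
    print(consensus(["100 C", "100 c", "212 F", "100 C"]))  # ('100 c', 0.75)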


"But chatGPT would actually write useful texts, most of them more useful than the average blog post. Over a few years they might be just as good as the best human articles."

If web sites start being written by A.I. and ChatGPT starts getting trained on A.I. output, it's going to start spewing garbage results.

It's only useful if trained on human input; A.I.s have no model of the world, and no A.I. can function if trained on A.I.-written texts.


I think that only holds true if:

1) there's no editorial influence on the ChatGPT content, i.e. sanity checking;

2) humans decide they don't like writing any more, which, given how many humans like to write, seems unlikely;

3) the ChatGPT echo chamber fails to reinforce non-garbage and instead resonates in some way that diverges to garbage.

There’s also a weird assumption that humans don’t write a bunch of garbage as is.


I don't understand the claim that editorial influence will fix things when the above claim is Google can't distinguish between legit content and SEO garbage. Does google not have smart people working for them who can use editorial influence?

As to point 2 , the argument is that, like with google search results the dataset will be corrupted and get worse, not that it won't contain some human written prose, so that's irrelevant when you say humans will still write things.

Claim 3 seems to be an argument that we can solve the problem of machine learning being based on bad bots by throwing more bots at it, which seems to be a very Rube Goldberg proposal that I'm not sure will work.

Finally, there is the claim that many humans are dumb, so the more chatbots that impersonate humans, the better, I guess?


>we can solve the problem of machine learning being based on bad bots by throwing more bots at it

Yes, you can solve dataset problems with models. Look at Constitutional AI from Anthropic. They only write a set of rules (the so-called constitution) and the AI will derive a large dataset of demonstrations all by itself. This can do away with the RLHF stage, where OpenAI has the best results so far but which costs money to label and is harder to update.

The model "Claude" from Anthropic is being discussed recently on Twitter

https://pbs.twimg.com/media/Flyt8qiWIAEPwZl?format=jpg


Dunno, it may just be the opposite that happens in the long term. The cheaper it becomes to produce content spam, the further the traffic becomes diluted and the smaller each individual spammer's profit margins become. The very same discovery issues that plague human-created content are, in the end, also harmful to spam content.

Google is overall a pretty bad benchmark IMO. There's a lot of quality content that they seem to struggle to find. Not that it doesn't exist, you just won't find it for a host of complex reasons.


Then don't use Google, use a paid search engine like Kagi or others

With Google you are the product


I do use Kagi. Most people haven’t heard of it though.


Throughout my career I've encountered a good number of people who were adding very little (to none or in a few cases, negative) value to their workplace.

It used to bother me immensely. On behalf of the employer and for their own sake. I can still find that sentiment if I look for it but today, my perspective is that at least in many cases, it's ok. If the company does well and the employee doesn't hate their life because of unfulfilling work, we're all better off than if they went unemployed.

Yes, ideally everyone should follow their passion and pursue the life of their dreams but some people don't think of work as the source of that. However, that doesn't mean that they would rather not work at all -- and I don't mean just because of the loss of income. I know that UBI is all the rage in here and I think it's an interesting thought experiment, but I just don't see how a world where only the top 1% of performers in any field are required and able to work is a promising thing.


> Yes, ideally everyone should follow their passion and pursue the life of their dreams but some people don't think of work as the source of that.

Not only do some people not think of it that way, I doubt enough necessary jobs are the dream jobs of enough people for it to work out. I bet we need the vast majority of people to be working in jobs that aren't any part of any dream of theirs.

Besides, maybe their dream wouldn't pay, so they fund it with their bullshit day job. Then they are pursuing their dreams. Hell, maybe that dream's just "raise and provide for a family"!

> I know that UBI is all the rage in here and I think it's an interesting thought experiment, but I just don't see how a world where only the top 1% of performers in any field are required and able to work is a promising thing.

I entirely do not follow how this would be the result of UBI.


>Hell, maybe that dream's just "raise and provide for a family"!

I think we're agreeing there.

>I entirely do not follow how this would be the result of UBI.

I didn't mean to imply that it would. I've just seen people argue that UBI is a solution to the problem of people not having work.


> I didn't mean to imply that it would. I've just seen people argue that UBI is a solution to the problem of people not having work.

Ah, I took you as meaning that UBI would cause only the top 1% of talent to work. Sorry, guess I misread that.

I'd say automation would free all those folks up to find meaning in other pursuits, like, say, the arts... but, uh, unless there's a society-wide rejection of AI art, that's looking rather iffy now. Like they could maybe still do it, the same way people enjoy doing Sudoku puzzles even though computers can solve them better, but they'll have to be content with the finished work having the same value to others as a solved Sudoku puzzle, i.e. none whatsoever.


Focusing in on the UBI section of your comment: I feel that there is always something being glossed over with the assumption that large amounts of people would stop working if we introduced UBI. I don't think this is true in the general sense (people would find other ways to work), but probably true that most people would quit their job in the short term.

Just to rant about that a bit, we acknowledge and ignore that a large proportion of people barely scraping by would probably not do what they were doing if not for the threat of homelessness and starvation. It's true that the way that things are built would have to change with UBI because most of what we benefit from in the current system is based on the exploitation of the most desperate. We should note though that a large proportion of the benefit of that exploitation is not going to bettering society, but into the pockets of the most wealthy.

At least with UBI we would remove exploitation of one's living situation as a tool to extract labour, which removes a lot of power from the existing monopolies.


My main concern isn't whether ChatGPT will kill off anything, like journalism. Instead, my concern is that this will create a Internet-wide echo chamber.

I've seen what happens when an individual human enter a delusional feedback loop and diverge from reality, that pivotal moment when sensemaking becomes self-referential. They get a psychotic break. I think we're going to get front-row seats at watching a whole, planet-spanning civilization enter a psychotic break.

Arguably, we are already there. It's just that, I think deploying ChatGPT will greatly accelerate this. This won't just be fringe or extreme subcultures, but rather, the mainstream cultures, as our sensemaking turns inwards.


I think the internet today has two modes, at least. One is commoditized content and the other is collaborative content. The commoditized content has long since been either automated or at least highly ripped off, and with ChatGPT-style AI it’ll probably improve dramatically to the point of being generally valuable and useful, even if it seems to offend people that a machine did something they did before (is this the white-collar John Henry moment?). The collaborative stuff will continue to be there as it is because we genuinely like talking to other hairless monkeys. Those communities will probably adopt “no AI” rules of discourse, alongside their current rules like “no soliciting or selling,” “no flaming,” “no jerks,” etc.

I for one welcome our new chatgpt overlords.


Strongly agree. We should have "made the work worth doing" before, but it's even more imperative with LLMs on the scene.

However, when will they be able to do work that is worth doing? GPT4? GPT5? There will be a point where we have to grapple with that.


AI/LLMs cannot "do work that is worth doing" in this context because the work that is worth doing involves human learning/education, not simply the production of content.

If you meant, "when can AI perform all writing tasks such that it would be a waste of time for human beings to develop the ability to express themselves in writing", well, fine question, likely answered far too soon.


It's already clear that existing LLMs have the ability to do things that were previously only accessible to people with "human learning/education."

"Production of content" can probably go a lot further than you think, for a model that has internet/API access.

Would love to see more warrants on the claims you are making - feel like you haven't fleshed it out enough in this comment.


The claim I made was that “work worth doing” was worth doing because of its effect on human beings. This is not the same thing as “work worth producing”.

Ex: practicing scales on my guitar is work worth doing, because of its effect on me, not because it produces anything of value in itself or of interest to anyone. It is meaningless that my computer can do scales vastly better than I can, because the production of scales is not what makes the work worth doing.


I disagree.

If the work leads to the production of something worthwhile net of any other losses from the work, the work is "worth doing."

So "work worth producing" is, at minimum, a subset of "work worth doing"


Will reliance on LLMs neuter our minds the same way that reliance on machinery has withered our bodies?


So what do we do with all the middle class human bs generators today once a lot of pr, journalism, copy and marketing is replaced?


If history is any indicator, the last group of middle class professionals who decided to take this question into their own hands ended up getting executed by the state or exiled to penal colonies[1].

The rest of them, and their families, ended up dying in utter destitution. That's what "we" will probably do again.

[1] https://en.wikipedia.org/wiki/Luddite#Government_response


When will ChatGPT replace C-suite executives, I wonder? Just think of the savings!


Why stop at the C-suite? A bunch of ChatGPT agents working together could operate a company fully automated.


They could function as consumers too.


You are free to not use the technology. The Luddites went around burning down other people's factories, so it's not super surprising to me that the state responded with force.


But people don't use the word "Luddite" to mean a worker protesting out of vengeance at the government that abandoned them. People talk as if Luddites were some backwards people who failed to adopt technology and died out because they couldn't adapt. If being a Luddite involved a civil war, it is no longer an argument in favor of never questioning technological progress.


And the Sons of Liberty threw other people's tea in the Boston harbor. They were free to buy tea elsewhere.

The state made machine breaking a capital crime. That is, if you break or destroy a machine, you can be executed.

Many who were tried and convicted of machine breaking were accused or guilty by association, or literally broke machines, and were executed. States responded with death and contempt, not merely force.


Golgafrinchan Ark Fleet Ship B.


This has been a problem with every new productive technology. The civilized answer is "society takes on the responsibility of teaching them to do other things".

Of course, this often does not happen for one of two reasons. First, society doesn't always choose the civilized answer; it often chooses to just marginalize the folks who just lost their jobs. Second, society often really wants to pick the thing they're teaching these people to do -- this is how you get people trying to teach coal miners how to code, despite coal miners being by and large uninterested in coding.

These are pretty real problems that we're already grappling with, but we're going to need to get very serious about it very soon. LLMs aren't the only reason why.


> So what do we do with all the middle class human bs generators today

Funny that bullshitting was the easiest job to automate to perfection. But now everyone can spin up their AI bullshit operation. Anti-bullshit AI would be very valuable in this situation.


Should be possible to train them all as scrum masters.


Now that is truly a scary thing. What is worse than a scrum master? A bitter ex-journalist scrum master.


>once a lot of pr, journalism, copy and marketing is replaced

Enjoy them doing something productive (I hope)


Good thing population is reducing, isn't it?


Even if we conveniently assume that all the people of the future smaller population have it in them to do the jobs AI is not yet good enough at (and none of them are would-be copywriters because that's where their skill ceiling is), surely the game-theoretical shift matters: from "if you, aspiring original writer, don't make it for some reason or another, you'll have to settle for a lower-prestige/-pay writing job" to "...you'll actually be stuck with none of the skills you acquired being of particular value to anyone". Writing as a career would assume a risk profile more akin to trying to become a K-pop idol.


Hopefully: UBI, retraining, something along those lines

Probably: Let them eat cake


Universal Basic Income?


Take them outside and shoot them.


on having to grapple: https://xkcd.com/810/


It’s kind of like that South Park special about the “streaming wars” and the water parks.

A swimming pool can have a certain amount of urine in it without anyone noticing, and that’s the way it’s always been. It’s not a problem as long as the amount is small and you add cleaning agents. The problem is that filling the web with content created by ChatGPT means adding more and more urine to the pool until someone notices.

It will be fine, for a long while, until it suddenly reaches a threshold where it’s noticeable, when all of the highly ranked hits you get for a question are subtly incorrect to the point that you can’t trust any of them, when more and more Wikipedia contributors are discovered to just copy-paste text from an AI, and then it’s no longer fine, but it’s too late because you’re still in the Internet swimming pool and now you’re swimming in piss.


To be fair, basically anything controversial on the internet is already, at minimum, subtly incorrect. Any given person's perspective on basically anything likely has at least one inaccuracy, or one mischaracterization, or one heavily biased take. I already can't trust virtually anything on the internet unless I really know someone, their credentials, their background, their biases, and also spend a lot of time researching the topic myself.


Bullshit. This speaks less to the reality of the situation and more to the author's biases/myopia. Remember when Craigslist was just a quirky little website? Cheerleaders at the time dismissed the notion it threatened anything of value, then it basically killed the newspaper industry, which in turn bricked boots-on-the-ground journalism. Anyone feel like claiming nothing of value was lost there? We're talking about a tool that has the potential to eliminate the trust metric for all online content.


Plus he's talking about This Version of ChatGPT. And it's not just ChatGPT that's happening - we are getting image generation, voice generation, code generation...


ChatGPT can already generate code.


ChatGPT just passed the exam to get a medical license in the US.


The article is only and specifically about English writing classes.


ChatGPT is trained on the internet corpus written by humans. Over time, more and more of this corpus will be written by ChatGPT.

Has anyone tested what happens when the output of an LLM is fed back as the training data?


Presumably we'll observe something akin to Michael Keaton's character in the movie "Multiplicity" where by the third or fourth generation, the defects are noticeably amplified.

https://www.youtube.com/watch?v=jTuCxwZi4sw


It was not a language model, but AlphaZero played against itself for millions of games and trained on the data it generated, without seeing any human games. And it reached superhuman level, of course.

Why was this possible? The game itself acted as an anti-bullshit filter. So you can train on your own generated data if you filter it.

Like this one: Large Language Models Can Self Improve

https://arxiv.org/abs/2210.11610
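
Schematically, the recipe is tiny; every function here is a stand-in, and the whole point is that the filter must be external and reliable (a game result, a compiler, a test suite):

    def self_improve(model, prompts, passes_filter, fine_tune, rounds=3):
        """Generate data with the model, keep only what survives an
        external check, and train on the survivors (the AlphaZero
        pattern, with the filter standing in for win/loss)."""
        for _ in range(rounds):
            candidates = [(p, model.generate(p)) for p in prompts]
            keep = [(p, out) for p, out in candidates if passes_filter(p, out)]
            model = fine_tune(model, keep)  # train only on filtered examples
        return model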


I presume that OpenAI is already running ChatGPT against itself. The discussion should already be incomprehensible to humans.


In the future internet, every paragraph will begin with "As a Large Language Model" or "It is not appropriate to"


When two AIs talk to each other they often end up developing their own private language.


I see two big problems with GPT:

1. It presents information really well that is completely false. I don't know exactly how it works under the hood, but sometimes it spits out paragraphs that read like the fever dreams of a flat earther.

2. It will cause our communication skills to further atrophy. Maybe in an innocuous way (no one memorizes epic poems anymore), but I think GPT will make people less likely to develop the ability to be persuasive.


> It will cause our communication skills to further atrophy.

Last week the dude next to me in the train asked chatgpt to generate a "good morning message" for his gf and sent it verbatim to her.

At the same time he was working on what looked like a work related slide presentation, 90% of it was filled with chatgpt answers


I watched a dude in front of me at work make a whole PowerPoint using ChatGPT, then use it to reply to all his emails, then go back to copy-pasting from ChatGPT into some Word doc.


The more esoteric the information I'm asking about, the more likely it is to tell me something that sounds right but where all the specifics are wrong.

As a test I asked for the world record speedrun for Pokemon Yellow; it made up a time attributed to a runner who didn't exist, yet the time was still in the rough 20-minute ballpark of what the any% time was in 2021. Bizarre stuff.

I expect it will raise a lot of writing up to the low bar of competent but voiceless. Which would actually be an improvement on many SEO spam blogs.


> It presents information really well that is completely false

I don't really see a problem here. We meatbags have been at it for a few millennia, bullshitting for fun and profit, and that train is moving ever faster with the improvement of communication technologies.

From what I've seen, this new rival isn't even exactly persuasive, it's merely super-confident (and we're giving it a lot of credit because we talk to it as a novelty machine). It has no malign (or benign) intent, no idea about where and how to put the pressure, and training for that (as opposed to training for nice-looking text that keeps to the topic and has correct grammar) is something that must be very non-trivial. It doesn't even have any solid agenda besides a few silly filters and canned responses when it hits those. Though, of course, it can be trained for any given bias.

And maybe I'm too optimistic or naive, but I'm also guessing that if a future AI actually manages to become persuasive, then trained and unleashed onto conspiracy communities it could very well lead to their demise, out-crazying the craze to the extent that it collapses under its own weight. Just let's not forget a kill-switch, lol.


Regarding 1, it's well known that LLMs are pretty good at hallucinating.


Regarding 2, it's well known that wetware online are already pretty bad at passing a persuasiveness turing test.

(spellcheck means we no longer have to worry about getting distracted by orthography; if gpt means we no longer have to worry about getting distracted by grammar, I should hope we could then move on to helping people express actual ideas)


Could what we consider persuasive change in the next 20 years? What if today's techniques are considered too crass, and only perfectly crafted words by AI, fine-tuned with empathy for the person you are trying to persuade, are the only thing that works anymore?


One of the problems that I have with the "AI might replace some menial jobs, but people should aspire to bigger and better things" argument is that a lot of menial jobs are training for those bigger and better things.

"Sure, ChatGPT can write some basic code, but it can't come up with creative solutions like I can." Yeah, that's true. But how did I get to the point of being able to make creative solutions? Years of working in the trenches, learning all the ins and outs of how to code and not over-engineer that college never even had a hope of teaching me.

Yes, nobody wants a life-long career developing formulaic CRUD apps, or writing catalog product descriptions, or making logos for every tom-dick-or-harry consulting shop with delusions of grandeur. But if you want to become a skilled developer, writer, or designer, you kinda need to grind through a lot of this bullshit work to grow a sense of taste to be any good at the higher-level work.

So what happens when we cut off that pipeline of junior jobs? It's going to happen, and it's going to happen from the bottom, up. We're not going to see big corporations take risks on trying AI software developers when they have thousands of resumes of the best undergrads to sift through every month. We're going to see it from the companies on the fringe, who couldn't afford to hire even a junior developer. They're going to take a low-risk chance because the alternative is just not do the work at all. And they'll prove out the model and 20 years later the only people who understand the basics anymore will be more interested in management positions than writing yet another Django app, and 20 more years later they'll be retiring with nobody else coming in behind them.


That's the same issue with DALL-E, Stable Diffusion, or any of these, and the ML world seems to think it's a win and that artists don't deserve to have jobs if they can't compete with or become users of those models.

If you can't get paid to produce art as a journeyman in a way that supports your basic needs, you won't make it to being a master artist except maybe in your free time, and you won't have the same references from working on projects during your development. It will be somewhat like the writing world, where many or most of the major writers are independently wealthy because nobody else can afford to take the time that writing requires off of real work that feeds and houses them.

Imagine replacing all the engineers in training with ML and wondering why there are no engineers later. That's where we're headed.


Is "being on the slopes toward a singularity" locally distinguishable from "pulling up the ladders behind you"?


I don't think 'pulling up the ladders behind you' is an issue here at all. This isn't artists or writers pulling up the ladders here.

The ML experts who are making these systems aren't professional writers or professional artists. Industry is always convinced by automation and lower costs, and boards who make these decisions are senior professional management not creatives. Neither group ever cared about the ladder behind them in other industries or their own companies.

The question is one of what we want to accept as societal norms, or push for via legislation or regulation. Personally I think the idea of banning such systems won't work, but all writing done by ChatGPT, or other 'creative' work done by other ML models, should be regulated such that it must be marked with a disclosure that it is the product of a 'creative' machine learning model. Human authorship is an important distinction that needs to be clearly maintained.


"I cannot emphasize this enough: ChatGPT is not generating meaning. It is arranging word patterns."

To reconcile this statement with his admission that ChatGPT reliably turns out passable results, we must assume the average student simply rearranges word patterns as well. So developmentally, this must be a mile marker on the road to competence.

"[Writing] is an embodied process that connects me to my own humanity, by putting me in touch with my mind"

ChatGPT's experiences with its own "mind" are fleeting and lost forever with each reset of prompt and zeroing of prompt history. Prompt compression and embedding architectures, combined with the steady growth of hardware memory capacity, should allow new models that continuously generate and process "thought" prompts. This should allow the primitive emergence of self-narratives and make a leap, I feel, towards the generation of true meaning.
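
A cartoon of such a "thought prompt" loop, where model and summarize are stand-ins (real prompt compression remains an open problem):

    def inner_monologue(model, summarize, seed, steps=10):
        """Each output is compressed into a running summary that
        seeds the next 'thought', a crude continuous self-narrative."""
        memory = seed
        for _ in range(steps):
            thought = model.generate(
                f"Context so far: {memory}\nContinue the train of thought:"
            )
            memory = summarize(memory + "\n" + thought)  # 'prompt compression'
        return memory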


These conclusions seem premature.

> To reconcile this statement with his admission that ChatGPT reliably turns out passable results, we must assume the average student simply rearranges word patterns as well.

Alternative explanation: the “passable results” test does not distinguish between ”rearrange word patterns” and whatever the students might otherwise be up to.

> So developmentally, this must be a mile marker on the road to competence.

Alternative: the students with passable results are not on the way to competence

Other alternative: it is a mile marker for students but not for the AI (just as learning to walk might be a mile marker for human babies on the way to playing baseball, but not for foals, who learn to walk quickly and never play baseball).


I do not see a way around the fact that writing out one's thoughts as prose is one of the best and one of the only methods of comprehensively assessing any complex concept. What are the other methods that are not derivatives of "writing your thoughts down"? There are none I can think of right now. Complex anything left as "thoughts" and/or only verbalized makes it too easy to gloss over critical points that are easily focused upon when written down.

The fact that students and many people cannot write is because their thoughts are a disorganized mess, only held together by the self deception they are not a mess. Yet, attempting to write down their thoughts, the disorganization is easily seen and then they organize their thoughts through the process of writing legible prose. Writing is how many organize their thinking, and if they never wrote they'd never organize their thoughts.


Exactly. We learn to speak our mother tongue without instruction, but need to learn to write. And what is actually happening is training in putting thoughts out in a linear fashion, and debugging them. That's been the core of the curriculum for about 2500 years.


"Writing is rewarding. Writing is empowering. Writing is even fun. As human, we are wired to communicate. We are also wired for “play.” Under the right circumstances, writing allows us to do both of these things at the same time."

What if on day one you showed up to high school English class, and there were 2 terms on the board: 1. COMMUNICATION 2. PLAY

The teacher then said: "The most successful people in the world are the best communicators. This is true in every field - business, science, technology, medicine, sports, the arts and politics. The most successful people also have a love of play and get the chance to practice it during their work day. Whether its in business, the arts, or intellectual pursuits. Would you like to learn to be successful like them? Have I got your attention? Alright, let's go."

Although this would be a non-egalitarian and elitist curriculum, it would be highly appealing for me. I would want to attend a course like this, and I would forgo the author's offer to not attend and still get an A. I would attend, and get the A.

Surely we can improve course design so high school students aka humans 'studying life' can learn success, and it just so happens that writing is part of becoming your best self and reaching your potential.

Teachers could benefit from some 'marketing style' re-positioning and re-framing of the curriculum to what is relevant and resonant for students.

There is a major relevancy problem in the high school curriculum.

What if the gym teacher said: "today we're going to talk about Jordan. Best fadeaway jumper in the business. How did he do it? You're about to learn about one of the greatest players in history."

Or if the English teacher said: "Obama. Jobs. Musk. Why are they so persuasive from the podium? Listen up, because you're about to find out how they craft presentations that move millions of people and billions of dollars in product."

My high school English teacher let me write an essay analyzing lyrics in songs by the Doors as poetry. It was awesome, and I got an A.

-Dan


There was the highly regarded English teacher at my high school in whose class I couldn't get better than a "D" on a test because I just read too much on my own and had too many examples to work with to be able to write essays according to her formula.

There were times later on when I made most of my income from writing, so I'm sure she was wrong.


I mean any random sample of one of Cormac McCarthy's novels would fail an English test, but his net worth is in the tens of millions. So I wouldn't put too much stock into what most English teachers think in general.


And yet his writing style works in some stories and not in others.

When he is writing a story that largely consists of internal dialog and a character mostly in isolation his writing is brilliant.

His style is painful in The Passenger, and an English teacher would be right to criticize him for it. That book has long passages of dialog, and his style is a detriment there; it just forces the reader to work extremely hard to decipher the conversations. I have not read Stella Maris yet; hopefully it has less of the back-and-forth dialog where his style suffers.


> There were times later on when I made most of my income from writing, so I'm sure she was wrong.

Or you were just really bad at everything else you could do.


I got grades much better than "D" in most of my other classes and got perfect grades on the English SAT and Physics GRE.


Right but I meant as a job, not test scores.

I got a perfect score on the English SAT, math SAT, physics SAT, etc. That doesn't mean I'm looking at all of those things as jobs.


I'm coming around to the idea that in general the more bullshit something is the easier it is for AI to disrupt.


It is only a matter of time before ChatGPT learns how to ask:

1. What did you do yesterday?

2. What will you do today?

3. Any blockers?


They might also listen in on meetings, write a summary in a Word doc that no one reads, and then add tasks to whatever kanban board one's company uses that you never look at.

Product Manager-aaS


They already do. I was surprised at how accurate the transcription of a recorded meeting came to be. Not sure what they used for transcription, the meeting was recorded in Teams.


To which ChatGPT could respond with:

1. titles of tickets moved within the past 24 hours

2. title of ticket in progress

3. No blockers right now


Not gonna lie, hooking up ChatGPT to Jira so I can query it via Slack or voice recognition would rule. It would certainly be quicker than the slow Jira UI.
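
A sketch of the Jira half, using Jira Cloud's standard REST search endpoint (the site URL, credentials, and JQL here are illustrative); the results could be handed to ChatGPT, or posted straight to Slack, to phrase the three answers:

    import os
    import requests

    JIRA = "https://yourcompany.atlassian.net"  # hypothetical site
    AUTH = (os.environ["JIRA_USER"], os.environ["JIRA_TOKEN"])

    def standup(user):
        """Pull raw material for the three standup answers from Jira."""
        def search(jql):
            resp = requests.get(
                f"{JIRA}/rest/api/2/search",
                params={"jql": jql, "fields": "summary"},
                auth=AUTH,
            )
            resp.raise_for_status()
            return [i["fields"]["summary"] for i in resp.json()["issues"]]

        return {
            "yesterday": search(f'assignee = "{user}" AND updated >= -1d'),
            "today": search(f'assignee = "{user}" AND status = "In Progress"'),
            "blockers": search(f'assignee = "{user}" AND flagged is not EMPTY'),
        }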


All these hundreds-of-GB-memory, several-seconds-to-cold-load Bugzilla knock-offs should provide a second, read-only interface that's just HTML. It'd make them almost tolerable.


ChatGPT seems a little too clever to stick to that script, though. Soon it will learn to answer like the rest of us do:

1. What we discussed I was going to work on yesterday.

2. Continue to do my job.

3. Of course not. If I did, I would have made it known long before this.


The point of the DSU isn't to force the engineer to recite what they did, it's to inform the peers so people can coordinate. I also find that engineers are reluctant to admit being blocked or to ask for help at all. My team gets value from the DSU. I'm sorry if you think the DSU (or the role of SDM) is so useless that it ought to be automated away.


Yet when AI learned to wash dishes in the home (among other tasks), freeing up a homemaker's time to join the workforce, it seems the labor market absorbed the influx in people largely by creating bullshit jobs.


Let's not use "AI" to mean "machines."


Certainly not. Indeed, all AIs are machines, but not all machines are AI. The kitchen faucet that preceded the dishwasher as the primary tool used to wash dishes is a machine, but has no intelligence. The dishwasher expanded upon the concept of the kitchen faucet by adding artificial intelligence to it.


By that logic a sprinkler system is “artificial intelligence” when compared to a garden hose. That’s not what the term means.


That's exactly what artificial intelligence means...

There is a secondary definition of AI that says basically an AI is anything that we haven't yet figured out how to do. Once we figure out how to do it, it ceases to be AI.

Indeed, terms can and do have multiple meanings, but I'm not sure that secondary definition fits the topic of discussion.


You’re being insincere. Let me guess, you’d say a can opener is artificial intelligence compared to a pocket knife?

That’s not the way anyone uses the term.


> you’d say a can opener is artificial intelligence compared to a pocket knife?

No. In that case the intelligence is applied by the user. A can opener could be a component in a larger AI system. An AI needs to be aware (in the artificial sense) of its surroundings.

Edit: I forgot that AI can openers do exist. The can opener I own is of the dumb machine kind and I hadn't fully considered what else might use that term. Yes, those are an AI.

> That’s not the way anyone uses the term.

Except when they do. Indeed, the "an AI is anything we haven't figured out yet" definition is about as common, but doesn't seem to be the definition that fits the topic at hand.

A third definition where AI is defined as an ANN seems to be emerging in popular culture, but again that doesn't really fit the topic at hand as a lot of the job-replacing intelligences are not based on ANNs.


I apologize for suggesting insincerity on your part. It’s just that I think there is such a thing as artificial intelligence and something simply being a more complex machine doesn’t qualify. Your definition is idiosyncratic but I recognize your earnestness.


> It’s just that I think there is such a thing as artificial intelligence

Perhaps you are actually thinking of what is more commonly known as AGI (artificial general intelligence)? None of these machines of which we have spoken of are AGIs. That I can agree with.


> Yet when AI learned to wash dishes

A pump, a motor and a timer


Hmmm, I suppose I'm interested in what you mean by bullshit. Just because AI makes something easier to do, doesn't mean the work is bullshit. Lots of research, composition, and communication work becomes much easier to do with an AI assist, but that doesn't mean the work is bullshit.


> I'm coming around to the idea that in general the more bullshit something is the easier it is for AI to disrupt.

> in general


Sorry, but I don't see how this is a clarifying response.

Research, composition, and communication are a big chunk of what ChatGPT is good at assisting. I don't know enough to claim that it /is/ the general case, but it's very likely not a minority.

And if that's the case, and these activities are not /essentially/ bullshit, then I'm still wondering how AI disruption can be claimed to be a reliable signal of bullshit.


Our CTO is already asking why we devs aren’t using it to be more efficient. Such a stupid and early conclusion to come to about the tool


Seems reasonable. Just ask ChatGPT if it can replace your CTO!


The issue is not that ChatGPT will kill things off, the issue is that ChatGPT 4.0 will kill things off. If you don’t think that’s a real possibility, you’re sleeping.


An aside, but I have a recurring nightmare: today is the day an English paper is due, and I failed to write it. It's happening less since I graduated nearly 20 years ago, but it still does. I wonder if it means I agree with the author...


I'm by no means a professional writer per se, but I've been paid for work that involves writing. Because of this, a version of ChatGPT that could do my writing for me would be helpful. In my admittedly half-hearted attempts, I've really struggled to recreate anything close to what I wrote.


Clearly, the solution is to train a ChatGPT instance that is fine-tuned on generating prompts for ChatGPT.


That's one of my favorite things to ask it at the moment, something like "If I wanted you to generate an interesting article on X, how should I write a prompt?" The results are usually pretty interesting.
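
A sketch of that two-step loop, assuming the openai Python package's Completion interface (model name and prompts are illustrative):

    import openai  # assumes OPENAI_API_KEY is set in the environment

    def ask(prompt):
        """One round trip to the model."""
        response = openai.Completion.create(
            model="text-davinci-003",  # illustrative
            prompt=prompt,
            max_tokens=512,
        )
        return response.choices[0].text.strip()

    topic = "the history of the QWERTY layout"
    better_prompt = ask(
        f"If I wanted you to generate an interesting article on {topic}, "
        "how should I write the prompt? Reply with only the prompt."
    )
    article = ask(better_prompt)  # feed the generated prompt back in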


And thus it begins.


Why would it help you? It would just replace you in those opportunities entirely.


As I said, writing is a small part of the job. I wrote for video and produced those videos. Half the time it'd be multiple videos from one script (interviewing different people with the same questions). I guess maybe a "prompt engineer" could just copy and paste ChatGPT answering its own questions into wherever but that's not going to be a comparable product to actually setting up a professional interview setup and asking a subject questions.


I tend to agree in spirit, but I worry there's a real risk of a lemon market[1] situation.

If, for whatever reason, consumers of content are not immediately attuned to the difference between vacuous GPT-nonsense and something that would bring them value if they just thought hard enough, it might lead to an overall reduction of returns to thoughtful, high-quality content, even if that content is only momentarily over the head of the reader, but wouldn't be in the long run.

This obviously to the detriment of both the consumers and producers of content.

[1] https://en.wikipedia.org/wiki/The_Market_for_Lemons


I disagree but it's more nuanced. The issue with chatgpt is the same issue that lies at the heart of most problems in modern day society: the consolidation of power and wealth in private hands.

These tools are being used in the name of productivity, which isn't a bad thing in itself, but when it's used to make a small number of people more wealthy and put a large number of people out of work, then it's an issue.

I would venture to say the obsession with standardized tests sucking the fun out of school stems from the same problem. Namely, the commodification of our education system in service of our larger economic system.


I’d like to think of ChatGPT as computer generated content. It’s a reflection of the world we live in, and the tech culture too.

Everything is centralised right now. So disclosures, trust, transparency don’t mean anything. At least not on a web that we don’t own.

Also, since when have humans been transparent? We’re all data collectors, as our brains aggregate the thoughts of others and our own and then share them with technologies.

Now that technology showed us the cons of a centralised system, it spooked plenty.

The future is reminiscent of Westworld, Dark, The Matrix. At least on the internet, where everything is code.


Yet. It hasn't killed anything worth preserving yet.

But as it improves, and it will improve, it will create so much nonsense content online, kill off many starter writing jobs, etc etc to the point where its effects will definitely kill off things that in the long run may be worth preserving.

To me the way to look at all of this is looking forward, not what happened last week. Just because a major illness hasn't killed you yet doesn't mean it won't in the future. Important to look at the trajectory and slope of the line of change.


I'm thinking Centaurs.

Well, the chess kind [0] at least. Basically, you combine an AI and a human to get really amazing chess playing.

Same thing, but with all the image and chat AIs.

Example: You're a film student and are trying to work on costume design. You got no budget, but a lot of time and friends with some sewing machines and leftover Burning Man parts. You take a look at something like Jodorowsky’s Tron [1] and say 'Holy crap, that head gear is super wild and I know how to make something like that!' So you pay $10 for MidJourney, get Jodorowsky’s Tron going, put in the supplies you got, and you get some stuff you can actually make. Not only can you get the outlines of the hats and head gear, but you also get an idea of the lighting, the lenses, the framing, etc. You get the cool aesthetic you're looking for and how to actually film it too. You get the A, natch.

Sure, you can work it out with pencils and felt beforehand, just like they were doing in 2010. But now? You've shortened the time, you can play with the lighting and lenses and framing, you can change the actors out, etc. You'd have gotten there by yourself eventually, but with the AI you've shortened the time and exhaustion.

I'm super pumped for the creativity that AIs will unleash. Let's use the tools and FLY!

[0] https://en.wikipedia.org/wiki/Advanced_chess

[1] https://kottke.org/22/12/jodorowskys-tron


"Show your work", "Correct answer without showing your work earns a zero".

These were things that were extremely common in STEM classes when I was in school and were rare in English classes.

This author/professor has a very solid point that English instructors being less lazy and grading on the process instead of the final result would go a long way toward combating cheating and producing better writers.

Another issue I remember was humanities professors at the college level expecting papers to be padded out to X pages even when you didn't need that length to make a strong argument or prove a thesis. If the professor wants extra pages of BS for the good grade, why not let ChatGPT take your 10-page paper that includes all the correct information and pad it out to 25 pages of redundancy and repetition to meet the arbitrary requirement? In some cases it might have been hard to resist if these programs had been around, because I always resented having to write that extra nonsense in a class I was taking as a required elective.


At least from the perspective of the English profs/teachers I had, actually generating words was a black box not requiring instruction or introspection. Editing and structuring, sure, but the nuts and bolts of how you turn a blank page into a nonblank page... that wasn't really discussed much.


> This author/professor has a very solid point that English instructors being less lazy and grading on the process instead of the final result would go a long way toward combating cheating and producing better writers.

It can create a believable made-up process too. If someone wants to cheat themselves out of their homework, the only thing that could stop them is OpenAI raising the price after mass adoption.


ChatGPT has been incredibly helpful in getting the skeleton of a text down on paper - a task that can often be tedious and time-consuming. Once the skeleton is in place, I am able to embellish it with my own flair, fix any factual errors, and make real progress. I'm not alone in my assessment of ChatGPT's potential. Many experts believe that the tool has the potential to significantly improve the productivity of writers.

However, it is important to note that ChatGPT is not a replacement for human creativity and originality. The creative and unique aspects of writing are still the domain of the writer, while the tool is simply a tool to assist in the more mundane tasks. Nevertheless, as technology continues to advance, it is likely that we will see more and more writers turning to tools such as ChatGPT to improve their productivity and efficiency.

*GPT helped me write this too, in the style of a Wall Street Journal article.


Funny, I felt this was GPT-written as I read it. The tells: the appeal to "experts" (in 2023 people don't write this way, and I think the very notion of "expert" has been degraded in the public's eyes) and the sort of perfunctory "however".


I don't really disagree with the basic point though, especially if you like to outline.

I just asked it to write an article about how things are different for CIOs today. It's a bit shallow and formulaic, although I'm not sure it wouldn't pass muster as a column in one of the trade rags. However, I could probably take it, rewrite it a bit, flesh things out, maybe add a quote or two, link to a survey, etc., while maintaining the basic structure, and it would be very publishable in a lot of places. And I bet I could do that in an hour or two.


It's a huge shakeup to human models of what is important. ChatGPT is so good at blithering without requiring any underlying model of what it is writing about. What people thought was profound isn't.

What will it do to management? Can ChatGPT pass an "inbox test" yet? That's a test for managers: you get a background document and a queue of incoming messages, and you have to reply to each message, in writing, in order. Someone who uses inbox tests needs to try that.


I think people both give ChatGPT too much credit and too little credit at the same time.

In my day-to-day life, I can’t think of a single thing that ChatGPT does better than a search engine, and with a search engine I can better judge the trustworthiness of the sources.

On the other hand, it does a great job writing working Python code from plain-language requirements. It does at least as well as a junior dev for run-of-the-mill scripting.

No, most development is not reversing a binary tree while juggling two bowling balls and riding a unicycle on a tightrope.
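
To make that concrete, here's a hypothetical example of the kind of plain-language ask and run-of-the-mill script I mean. The prompt and the 'amount' column are invented for illustration, not verbatim ChatGPT output; this is the glue-code tier where it competes with a junior dev:

    # Ask: "Write a Python script that finds every .csv file under a
    # directory, sums the 'amount' column in each one, and prints the total."
    import csv
    import sys
    from pathlib import Path

    def total_amounts(root):
        # Walk the tree and accumulate the 'amount' column from every CSV.
        total = 0.0
        for path in Path(root).rglob("*.csv"):
            with path.open(newline="") as f:
                for row in csv.DictReader(f):
                    total += float(row["amount"])  # assumes an 'amount' header
        return total

    if __name__ == "__main__":
        print(total_amounts(sys.argv[1] if len(sys.argv) > 1 else "."))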


Clearly the way forward is to teach the students how to do:

"... the kind of tweaking you see me doing above, tweaking which I can do because I possess a relatively sophisticated understanding of narrative craft and know what to tell the AI to do"

This will likely entail teaching them how to do the job without the AI (as was always the case). The same applies to software development, which is just a different form of writing.


- "All it can do is assemble patterns according to other patterns it has seen when prompted by a request"

Sounds pretty much like everyone I know.


ChatGPT is just a preview; GPT-4 is going to demolish whole industries (based on playing with the beta). Funny times ahead.


Reader attention?

ChatGPT output is often so absurdly verbose that it becomes tiring to try to make sense of the text (when the text makes sense in the first place). The effort we're willing to put into big swaths of text is already low, and I fear ChatGPT will only make it worse.


You can make the output more/less verbose with the right prompt.


> with the right prompt

Supposedly you can summon Anubis with the right prompt as well.

But nobody does.


Yeah, I see your point. The fact that it can be done (making responses concise) doesn't mean people will necessarily do it.

However, I can see a media outlet incorporating a set of "checklist" prompts into its publication process. Any GPT-N generated article would go through them before being pushed out. One of these checklist prompts can be about reducing verbosity.
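
A minimal sketch of the idea in Python. The `complete` argument is a stand-in for whatever text-generation API the outlet actually uses (a placeholder, not a real library call), and the prompt wording is illustrative:

    # Checklist prompts applied to a draft in sequence; the last one
    # targets verbosity.
    CHECKLIST = [
        "Remove any repeated or redundant points from the following "
        "article, keeping everything else intact:\n\n{article}",
        "Rewrite the following article to be as concise as possible "
        "without dropping any information:\n\n{article}",
    ]

    def run_checklist(article, complete):
        # complete: prompt string in, model text out (placeholder).
        for prompt in CHECKLIST:
            article = complete(prompt.format(article=article))
        return article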


> it would easily receive the maximum score a 5, particularly given the fact that the AP graders do not pay any attention to whether or not the content of the answer is accurate or true

I'm not familiar with the US education system. Is that last point true, or sarcasm?


What's being discussed is some sort of literature exam.

I'm not familiar with this exam, but if I said "flowers represent fertility (because they are pollinated)" and a classmate said "flowers represent gravity (since they fall)," it shouldn't really be evaluated for accuracy, just on how well we each made our argument, since we're not really discussing facts.


It's not about killing something.

It's about SWARMS of AI chatbots amassing karma points, likes, and engagement, and at any given moment they can overwhelm any community and gang up on people whose viewpoint differs from the one they are programmed to promote.


I was just thinking about this. ChatGPT feels like it generates the equivalent of chew toys for human reading. And you have no idea whether what you're chewing on is real or fake, because in the end it's just a toy with no guarantees.


ChatGPT has been a boon to the dog-shit take industry.


If widely adopted, ChatGPT will create a sort of poetry of prompting for the desired output. We will engineer responses instead of writing.


No, it can't, but most work isn't worth preserving either.


ChatGPT will cause an information Kessler Syndrome.


Agreed. Cause I trained it not to.


I don't mind AI killing off anything and everything, as long as it goes hand in hand with producing the means for those whose jobs are killed off to survive and thrive.

I want my warm, sunny hill where I can discuss philosophy, if I so choose, with some bread and wine, after which I'll play old arcade games and then work on a classic car, after which I'll make love to my wife.

Can you ask ChatGPT if it can arrange for that, please?

* wakes up *



