Total Crap – A magazine written by AI (mcsweeneys.net)
207 points by zdw on May 9, 2023 | 145 comments



I am not a fan of satire because it mocks the truth, but I am a fan of the truth that is insinuated by it. Zeller nailed it with his ChatTCM.

"You may be skeptical about machine-written work at first, but once you see the software rearranging familiar-seeming paragraphs into different orders and changing a few words, you’ll realize it’s a suitable replacement for your favorite authors, who can now rest and starve"


Isn't most content like this, though? There are a handful of people creating truly new books/movies/TV/YouTube/TikToks. The rest just produce content that rearranges those ideas once they prove successful.

99% of everything is garbage, the fight now seems to be who will produce the garbage.


Frankly even that 1% will start to feel squeezed out by AI. Why hire a famous DM to write the next D&D campaign book(s) if you can get AI to do it? That's one more income stream lost, plus the opportunity for that writer to make money signing books and the like. Why wait for John Scalzi to produce another sf book when he already has many books out and you can train a chatgpt to make more of his style? Brandon Sanderson? Any number of beloved authors. Then, how will they build hype about their work, competing with themselves?

How will authors survive if ghost writing as an industry collapses into chatgpt? And also writing for IP? And also copyediting? And also... and also...

Writers aren't exactly a hoity-toity class of people, broadly speaking. They usually make money through multiple streams of freelancing as well as sales, both of IP/ghostwriting and of works originally under their own name. This is especially because book advances are often so small an author cannot possibly live on them, and authors don't get royalties until the advance finishes paying out, a process that can take over a year. Any slice ChatGPT takes up is a slice removed from that revenue for authors.


> Why hire a famous DM to write the next D&D campaign book(s) if you can get AI to do it? That's one more income stream lost, plus the opportunity for that writer to make money signing books and the like. Why wait for John Scalzi to produce another sf book when he already has many books out and you can train a chatgpt to make more of his style? Brandon Sanderson? Any number of beloved authors. Then, how will they build hype about their work, competing with themselves?

It's the counterfeiting problem. Designers still succeed despite it, though it does cost them some revenue.

AI adds a lot more competition to the marketplace, sure. Anybody can produce anything that's "good enough." But you need to step up your marketing game and/or product quality to get people to prefer your product over something they can make themselves.

A book written in the style of John Scalzi is not a John Scalzi book. He still owns the rights to his IP. You continue buying his books because you want to hear how the official story continues, not read someone's shitty fanfiction knockoff.


> Why hire a famous DM to write the next D&D campaign book(s) if you can get AI to do it?

This is what I mean by make up your mind. Either the AI is producing the same quality of content as writers, or it's bad and produces worse content. Articles like these seem to imply both things, and it's confusing.


I never said it's the same quality. I said: why hire a famous DM to write the next D&D campaign book(s) if an AI can do it? This means the D&D sales/marketing/executive staff decide they're perfectly okay with ChatGPT's mediocre output if they save a chunk of money and can rely purely on their brand to make sales; since their costs are lower, they don't have to sell as much to make the same profit as before. Maybe it comes back to bite them in the ass, but that will happen later, and the DM has bills/family/food needs right now.


Sure, if they want to completely devalue their brand by releasing low-quality stuff, they can go ahead. People are not stupid; they will only tolerate low-quality stuff for so long. Evident in the rise of self-published RPG stuff. I'm not that familiar with that market, but I am familiar with pro wrestling. At some point WWE established a monopoly and its quality slowly started going to shit. Some people were completely writing off the industry in North America. But then, slowly, in the 2000s we had the rise of independent promotions and eventually AEW. The whole thing is a beautiful display of capitalism and markets functioning: a monopoly is slowly being broken down because it started producing shit.


Sure, and in the meantime an established, talented DM will now have to struggle harder to eat because the established institutions no longer support their work. That's all I'm saying. Forcing titans of an industry to go back to struggling in indie spaces without their previous audience because industries don't value quality work is precisely why Ikea makes money hand over fist, way more than any freelance individual carpenter.


That is sad indeed, but it's the price we pay for living in a capitalist society. The only way to survive is to be productive in the market. During market adjustments, a lot of people get shafted.


> Frankly even that 1% will start to feel squeezed out by AI. Why hire a famous DM to write the next D&D campaign book(s) if you can get AI to do it

What makes you think this famous writer is in the 1%?

The writers they're talking about are the ones actually pioneering new genres, not the ones creating the n-th iteration of the same style.

Which is admittedly much fewer than 1%, but I think the point came across, at least to me.


Who are the 1%? Will they ever become those new genre pioneers if they don't spend time working on some of that 99% work first? Which they might not do any more, if AI does it instead?


I guess we'll be forced to find out over the next decade, as the LLMs are already out there.


>Isn't most content like this though?

No.

Next?

> there are a handful of people creating truly new

A handful that you bothered to find out about.

>99% of everything is garbage

Only if you are choosing to consume it.

>the fight now seems to be who will produce the garbage.

No. The fight is about people accepting the garbage, like the AI-produced articles, AI-autocompleted sentences in their emails, AI-autocompleted code.

Which is the point of the article we are discussing.


Lol, do you talk to people in real life like that?

Personally I think the other person is spot on, 99% of content is shit. Another Marvel movie anyone?


99% of everything is garbage but that 1% is the successful part.


Don’t know why you’re being downvoted. Scrolling through TikTok or most social media it’s obvious most content is just people doing their own variation on some stupid trend.


Just because you don't enjoy something doesn't mean other people also don't? Like yeah most content is in fact just a variation on other content, but that doesn't mean that thought didn't go into that remix.


Yes, the thought is "how do I squeeze some money for myself out of this currently popular thing XYZ". This goes beyond TikTok; I'd say it's human nature, for better or worse (including the sizeable audience that actually accepts it).


There are absolutely people like that, but I think you are wildly overestimating the proportion of people who are pessimistically trying to maximize profit vs making content for fun.


I in fact think TikTok does a great job of getting fresh content to me. Fresh doesn't always have to mean new; it can mean well-executed old ideas, or remixes of several things, etc. Some of my favorite TikTokers are the ones who have locked in their formula and execute it well. I know from the creator's standpoint it must suck to be locked in like this, but from the viewer's side I think it's enjoyable as long as they put effort into execution.


It's not how AI works, though - there is no rearrangement, and I think even most artists have learned that it works differently. In the early days of GPT, people thought it actually copied and pasted content from the internet, but by now most people grasp that it's unique pieces of work that are being created.


Depends on the people you speak with. Even the most fascinated folks I know are now completely convinced that there is nothing being created.


>The purpose of writing is to take up space

Don't get it twisted, written language was also invented SPECIFICALLY to sell ads!


Ah, I thought it was to complain about bad purchases.

https://en.wikipedia.org/wiki/Complaint_tablet_to_Ea-nasir


“It is currently kept in the British Museum” aka Museum of Stolen Goods ;-)


That’s pretty hilarious. Although without looking it up I would bet the first use of writing would have been ownership documents and transactions


The earliest documents we have are lists, but I think it's hard to tell whether you're looking at a shopping list, an inventory, or what. Early scripts tend to be logographic for this reason, if you mostly write stuff like "Ten Amphoras of wine" then a symbol which means "Amphora of wine" seems pretty much as useful as a way to write numbers.


> The purpose of writing is to take up space

This is so true, which is why I’m compelled to share my agreement by typing words into this conveniently unoccupied textarea. Horror vacui!


Correction, it was invented to more accurately account for ad impressions.


The first ad was just a crude drawing of a shapely woman with an arrow pointing eastward. Urgak thought his addition of a penis to the drawing would bring in more customers, and in a flash of genius Kalik devised a counting scheme a lot more flexible than the stones they were using. It turned out Urgak's suggestion was statistically insignificant, but they all agreed that they liked the addition of the penis, so they kept his variation.


Disappointed that Total Crap is not an actual magazine I can subscribe to.


Maybe some people are already subscribed to Total Crap under a different name without realizing it.


In the UK we'd call it the Guardian or the Daily Mail. Now those are the types of content AI could easily replace.


As someone who has adapted his reading habits to the internet's mostly thin content and fallacious headlines, I was very disappointed not to find any link to the actual magazine as I rushed through without really trying to consume the words.


I, too, choose this guy's dead subscription link.


>Nearly 50 news websites are ‘AI-generated’, a study says. Would I be able to tell?

https://www.theguardian.com/technology/2023/may/08/ai-genera...

yesterday: https://news.ycombinator.com/item?id=35867347


If you told me the Guardian was mostly AI generated after being trained on progressive writings and WH press releases I wouldn’t be surprised in the slightest


Is this because you don't read The Guardian and would opt to blindly believe the first thing said about it?


I read articles quite regularly. They are almost always heavily partisan, spuriously sourced and have been on the wrong side of major stories for years


I'm confused. (1) You read it regularly. (2) Then you say: "heavily partisan, spuriously sourced and have been on the wrong side of major stories for years".

It seems strange to read a newspaper regularly that is so bad.

What do you recommend instead?


I regularly read both sides of a story to avoid getting polarised, and by averaging each side's views I might find the truth. But the Guardian has a heavy fetish for poverty porn and stereotyping, making it difficult to digest. A bit like the Daily Mail, but with less useful content and more racial tension.


For a second there I thought it was throwaway responding to his own comment. I loved him for one second!


Let's hear that awesome, alternative, unbiased news source


There's no need, criticism of one thing does not require endorsement of an alternative. The Guardian deserves every rotten fruit and compressed fecal mass thrown at it.


I see we've passed "first they ignore you" and moved onto the "next they laugh at you" phase of LLM denial. The next step is "you win."


I don't think there are many people who doubt that LLMs could play an important role, substituting for many human activities. The question is whether that would actually be a good thing.

It's like proudly proclaiming that detractors of the food industry are mad that things like hot pockets, Doritos, fad breakfast cereal and wonderbread won over fresh food. Well, yes, they kind of won, and people are mad, what's there to be happy about?


If you look at more mainstream forums like Reddit, people are in really hard denial/look how stupid it is mode.

I think at least some people on HN are able to sort of see through its unformed intelligence into how it can be constrained to likely be hyper effective.


/r/programming would be the peer forum in Reddit, but they keep quiet about this subject for the most part, I'd assume that moderators don't want to handle the hot potato that threads on this would be.

/r/technology is where most of the discussion happens, it's full of angry and confused working class people who were already feeling squeezed by the US economic and political landscape, and lash out at Big Tech CEOs and "tech bros", it's not a place to have debate.


I have had some interesting conversations where I've tried to compare the 'word salad' of Pynchon with a relatively straightforward opening-scene script generated by GPT. The writing forums really are almost wildly irrational about this -- when all I want to illustrate is that GPT can produce usable, mediocre value even if it's not high art. While at the same time Pynchon may ... in reality, have questionable value in his word salad of nonsense. Yet so-called 'writers' will hold him on hallowed ground, while not even engaging with an AI script that generally accomplishes a task well.

I'm a former lit major and this sort of ignorance is common in the field. Failed writers constantly berate everything that works as crap, while bemoaning the loss of true art that no one cares about.

Functionality will trump art, 99/100. It's the way of the world. And AI is really #)($*ing functional.


Or maybe we've just moved onto the "next they laugh at you" part, which will continue for a decade or so before any actual advancements in AI happen beyond the creation of larger matrices of inputs to outputs, with equally poor top-level performance. I don't think anybody is laughing at LLMs or their potential, but rather laughing at and mocking the AI doomers who say everybody's job is kaput because LLMs can spit out empty copywriting and junior-level code samples.


People's freakouts are completely reasonable. The pace of development in this field is unreal and shows few signs of slowing down. The primary blocker to adoption in many fields is just software integration, which will come within a few years at most. At that point we'll see what the limits of LLM capabilities are, and which professions are no longer economically viable.


My position here is certainly debatable, but IMO the pace of development in AI as a whole is actually pretty lackluster, and only appears monumental because of the massive influx of VC dollars (and development of computing hardware!), which allows for these ever more monolithic models. Training larger and larger models doesn't really represent "development" to me in the sense I usually think of it. The largest hurdle with AI, and one that LLMs have not solved and likely never will, is that they don't truly understand anything at all, but are instead (in simple terms) a large matrix of inputs to outputs. The areas where current AI falls flat on its face are areas it will never be able to encroach on as it works today, no matter how many TPU hours you waste on training.


Memory, internal iteration, and scratchpads can be and have been added. Foundation models don't include this yet, but it's not unexplored territory. Online learning is more difficult, but I'd argue that various fine-tuning and control approaches are slowly working towards it.

So what is a brain other than a very complex non-linear function which takes inputs, iterates on them, integrates them with memory and eventually generates output?

> is that it doesn't truly understand anything at all, but is just instead (in simple terms) a large matrix of inputs to outputs

The first step to defeating the tiger is to realize that it cannot hurt you, for it is only made of simple atoms.


> So what is a brain other than a very complex non-linear function which takes inputs, iterates on them, integrates them with memory and eventually generates output?

A thing that processes and embodies sensory input, makes informed guesses about other people's intentions and values, grows tumors, worries about the dirty dishes.... How many other "complex non-linear functions" are made of meat?


>> is that it doesn't truly understand anything at all, but is just instead (in simple terms) a large matrix of inputs to outputs

> The first step to defeating the tiger is to realize that it cannot hurt you, for it is only made of simple atoms.

This is now my favorite rebuttal when people claim LLMs are simply some form of math.


When a person uses the phrase "No true understanding" I can tell they don't know what they are talking about at all.

It can engage, use, explain, transform and apply highly sophisticated concepts, philosophies, styles, and models of the world in writing. If that isn't understanding...


There's a distinct difference between asking somebody to write/copy/show a complex concept and understanding it. What I don't understand (get it?) is how people still conflate LLMs' behavior with true understanding of concepts when it's obviously not the same thing.


Please provide the definition of "true understanding." I think the only good argument here is that it lacks "lived experiences" and has synthetic understanding. That being said, have you been to a university? Profs seem to be entirely based on a synthetic understanding of the world anyways.


Indeed.

It can also write non-trivial code that works. There's a saying "if you can't code it, you don't know it". Conversely, if you can code it, you do understand it, at some functional level.


"No true understanding" is the new "no true Scotsman." If you're still repeating this line in mid-2023, you're not meaningfully engaging with the technological developments that are occurring in front of your eyes.


Not at all. Being able to give an example of something or spit something out that resembles a concept is not the same thing as understanding it, at least in my opinion. Keyword being true understanding, although I'll concede that it might have arguably basic understanding of such concepts.


You need to define "true understanding" in some concrete, measurable way, or accept that you're just using it to avoid thinking clearly about AI capabilities.


The difficulty with that is that trying to define "true understanding" crosses deeply into the philosophical realm. Moreover, a lack of ability to define a term does not invalidate my position. That being said, I'll instead present you with a question intended to display what I believe is the difference between the two. If I write an example of an integral, it clearly shows that I know what an integral is, but does that mean that I understand integrals? Even if I could concisely answer any question you asked about integrals (i.e. an all-knowing AI), that is not the same as my definition of "truly understanding" integrals. To pose another philosophical question which highlights what I believe it is to "truly understand" something: Does the library of babel understand? I would say that despite it having the answer to any question you ask, it does not "truly understand" anything. I wish that I could put the difference between these two into concise wording, but alas I've been unable to quite put my finger on it despite having spent some good part of the day giving thought to it.

Some of these concepts are explained/explored much better than I could do on the wikipedia[0] page for understanding, particularly the "Assessment" section.

[0] https://en.wikipedia.org/wiki/Understanding


What about the AP-Calc student that has "merely" practiced hundreds of integrals and is therefore equipped, not just to pass the examination, but to use integrals as part of more complex theories down the line?

If they know only enough to apply the concept successfully without knowing its fundamentals, have they "understood" it?


One reason I'm sympathetic to this view is how far behind pre-AI automation is. Whenever I work with small businesses and governments (and bigcos too, but in different ways), I am amazed by how much make-work there actually is. Somebody is doing data entry but considers it valuable because some fraction of that work is identifying accuracy issues early, or translating between systems and domains. Then somebody applies basically rote business logic to make a decision and route things to somebody else to do work. All of that could be automated. As an industry we've been automating that kind of thing for 50+ years, but there are still mountains of inefficient processes that could be automated, that are basically just shuffling paper. But the overhead cost of hiring someone to analyze your processes and adapt them to some kind of system, the cost of maintaining that system, and the cost of getting it wrong when edge-cases weren't in the requirements because of the tacit knowledge embedded in employees, often makes the project not worth it, or a failure.

If LLM capabilities just help provide better glue (or even just political/hype motivation) to enable easier integration/automation of basic white-collar workflows there could be significant disruption. There is a lot of low-hanging fruit out there ... every spreadsheet is a new potential SaaS business.


Yes, people keep saying LLM tech is laughable because it can't solve for cold-fusion or some nonsense in a single prompt.

In reality, all it has to do is replace a crap ton of white-collar "lgtm and pass it on" jobs that involve minimal transformation or logic. That is a crap ton of people.


A decade can still be quite short considering the types of problems that may need to be solved, so perhaps that is the timescale that some of the people you're referring to are thinking in?


You are missing the final key phase: everything gets crappier and we just accept it. There is no denying that LLMs are the future. But, much like the SEO optimized Internet and offshored goods, everything just sucks a little more.


Or hyper processed foods, and city planning that encourages sedentarism. Yes, congratulations to Kraft and Ford on their victory, now 38% of the adult population of the US has diabetes.


People said this about Bitcoin. They said it about Google Glass. They said it about VR headsets. They never seemed to notice the same logic applied to Microsoft Bob.


Never heard of Microsoft Bob but after glancing at its Wikipedia article [0], it sounds super useful for beginners! It's Clippy's dad.

[0] https://en.wikipedia.org/wiki/Microsoft_Bob


The immediate uptake and application of this technology to everyday lives is what's different here. Bitcoin was NEVER hyped outside of Silicon Valley to the degree that people here think it was.


> Bitcoin was NEVER hyped outside of Silicon Valley to much of a degree that people here think it was.

I directly experienced Bitcoin maximalists selling Bitcoin-backed retirements to seniors far from the Valley.

The hype was so bad, the subsequent price-drop was a pop-culture event that got mocked by mainstream comedians to mainstream audiences.


> Bitcoin was NEVER hyped outside of Silicon Valley

You opened up IE/Edge and the "MSN/News" showed "Get rich with Crypto!!11". Today you open up Edge and the "News" show "Get rich with AI!!11" (With an AI generated Musk face).


Yes, there is a hype cycle that afflicts every new piece of tech. The existence of hype isn't evidence either way. What makes smart people think that this time is different is a straightforward comparison of the capabilities of LLMs and the skills needed to perform various human jobs, including (ominously) high-paying, scarce jobs.


> including (ominously) high-paying, scarce jobs

You mean like being a CEO and getting a bunch of money where the only credential is having previously been a CEO regardless of success in previous role? I think LLMs or even chat bots could handle a lot of those positions quite easily.


What is "winning" here? How do you "win" art? Human writers don't exist anymore? Seems like a really odd way of viewing writing.


I am pretty sure the next step is "they fight you", before you win.


True! And that will come very soon.


You only progress to the next step if you’re actually good. It’s possible you stay mediocre, and your only function was to amuse the gatekeepers.


I just checked and McSweeney's has been around since 1998! From the Wikipedia article:

   In contrast, in 2001, the New York Times noted "The McSweeneyites may be the current emperors of cool, but they're starting to need some new clothes."
Maybe I'm just in a bad mood today, but the sardonic humor of McSweeney's just doesn't hit for me anymore. It feels like a vibe that matches the style of Gen-X (my generation) but broader culture already has moved passed Millennials and onto Gen-Z.


I don't think it's a generational thing. My contention is that smugness kills satire. McSweeney's can be very smug.

Some people obviously disagree (since satire that I consider to be smug are popular, and because satire does have a broader definition than what I'm saying), but satire works best when an outsider voice has a bit of earnestness and empathy as he punches up. Otherwise, it feels self-indulgent, insular, mean-spirited, or preachy.


Jordan Klepper comes to mind: https://www.youtube.com/watch?v=V1VpraYEUPk

He can interview the most self-stereotyping person, mock them mercilessly, and they'll have no idea because he's so earnest.

https://www.youtube.com/watch?v=kPNwJAcMH5U


Klepper is awesome, but it's still shooting fish in a barrel with that crowd.


This is a freelance writer lampooning overpaid neural-network mechanics and "maybe we're all just auto-complete models" philosophy bros and their myopic search for efficiency and "intelligence". "Punching up" is a relative term -- in many circles the latter group has all the social cachet.


You're probably right. I thought about this after I posted: Because of the nature of the comment I replied to, I was speaking more generally about McSweeney's than this article specifically.


McSweeney's is a complex combination of things. You are spot on with smug. It is also pretentious but in an ironic way, like they are winking at you (my chief criticism of David Foster Wallace). But I chose sardonic because it includes the cynicism that articles like this one feel full of.

From Wikipedia on Sardonicism [1]:

    To be sardonic is to be disdainfully or cynically humorous, or scornfully mocking.[1][2] A form of wit or humour, being sardonic often involves expressing an uncomfortable truth in a clever and not necessarily malicious way, often with a degree of skepticism
I used to love that kind of bitterness, almost like a punk-rock attitude. Maybe my tastes have just changed.

Gen-Z does seem to have a similar dark humor tinged with fatalism and irony. It just somehow works better for me. Perhaps it is mixed there with the kind of earnestness that characterized Millennials, which helps to remove the smugness?

1. https://en.wikipedia.org/wiki/Sardonicism


I've seen plenty of people younger than me say they dislike "South Park" because "they attack everyone" isn't seen as a virtue; it comes off as overly cynical, opportunistic, nihilistic, etc. Kinda tracks with what you're saying.


Good catch. I had wondered what I found off about a lot of modern satire, and yeah, it's smugness. I think it's a side effect of polarization. A lot of jokes aren't just mocking in good fun anymore, they're more just insults to people with different views. Which is ironic because there seems to be a modern narrative that lots of past jokes that were in good fun are actually offensive.


Say what you will about McSweeney’s, but to me they can ride _Macroeconomic Changes Have Made it Impossible for me to Want to Pay You_ [1] for another couple of years.

[1] - https://www.mcsweeneys.net/articles/macroeconomic-changes-ha...


Articles like these are kind of why I stopped reading McSweeney's. On most days I still find it funny, but sometimes I find it to be a little too real.

For me, McSweeney's peak was "The Snake Fight Portion of Your Thesis Defense":

https://www.mcsweeneys.net/articles/faq-the-snake-fight-port...


This part is gold:

    Given every available forecast, it was the perfect time to hire 1,200 blockchain developers, spin up original streaming content, and lead three rounds of funding for my nephew’s AI-powered B2B sourdough recipe app.


I love it. The delivery is very Douglas Adams-esque.

Hope you feel better.


I'd never heard of McSweeneys, but in general: The cynicism around tech discussion has become really tiring for me to read lately. I need takes that require effort, and being a sardonic smear across these topics seems easy for anyone. Ironic, especially given the take of TFA.


That makes me wonder: what would be the rough equivalent of McSweeney's for either or both of those generations, assuming they haven't entirely abandoned the written word? (I'm asking as a Gen-Xer who is more than willing to move on.)


As a millennial, The Onion used to be our satirical touchstone -- they even transitioned with us into the YouTube era with the inimitable Onion Digital Studios, bringing us top-notch parody of our own culture with miniseries such as Sex House [1]. Unfortunately once ODS was killed off, their content has withered.

The Hard Drive seems to have stepped up to fill its place in our social media feeds. The quality of the satire is across-the-board better than anything from the Onion in recent years, and taps into our generation's "woke" mindset. Here's one y'all will enjoy [2].

[1] https://www.youtube.com/watch?v=0App7QizQCU

[2] https://hard-drive.net/hd/technology/slight-problem-with-win...


The medium is the message so I don't think you'd have a McSweeney's for GenZ. GenX was the -zine generation, so satirical papers and magazines were kind of your thing. Millennials got a bit of it, but we were mostly a blogger generation. Zoomers are more of a Twitch/Vlog/YouTuber generation. The sense of humor seems to mostly be short form, Vine style content. Check out ProZD sketches on YouTube for a good example.

We all seem to like podcasts though, so it's nice that something unites us.


As we're talking about writing: *past


I am really disappointed in McSweeney's here; they usually have much better takes. The reality is that this satire is tinged with fear more than high-handed criticism. It makes the essay seem weak, without the position of strength and conviction that good satire needs.


Hahahaha, accurate. I was just lamenting the fact that AI feels like a summation of what 90% of internet tech today actually is: a vapid echo chamber, running on hype, with little actual value for end users. A sort of twisted, soulless hall of mirrors selling advertisements to bots. This edifice needs to crumble.

Praying for the day of google's collapse, (most) social media's demise, and an alternative business model to ads.


I think part of the problem is that a lot of people still seem to think there is a knowable/measurable/learnable source of absolute truth, but since any learning is almost always implicitly an "appeal to the people", you get populism rather than truth.

Even if you work out a way to say "Professor Blah has a higher-value opinion than John Smith", this won't always hold. Sometimes the majority are right, sometimes authority is wrong, sometimes published views change over time, sometimes truth is hidden for "higher" reasons, so I don't really see where we end up.

That said, for things that are probably largely uncontroversial, like "a good way to write a prime sieve in Erlang" or something, it's probably going to give a pretty decent response.
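For what it's worth, a prime sieve really is the kind of well-trodden question where an LLM usually does fine; here's a minimal sieve of Eratosthenes, sketched in Python rather than Erlang purely for illustration:

```python
def prime_sieve(limit):
    """Sieve of Eratosthenes: return all primes <= limit."""
    if limit < 2:
        return []
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            # Mark every multiple of n, starting at n*n, as composite.
            for multiple in range(n * n, limit + 1, n):
                is_prime[multiple] = False
    return [n for n in range(2, limit + 1) if is_prime[n]]

print(prime_sieve(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Exactly because this pattern appears in thousands of tutorials, it's the sort of answer a model can reassemble reliably.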


Maybe so, but I suspect there are better ways to sort through information, even if they're opinions tied to narrative. The current model, driven by ad dollars and cheap information, seems to have led to a landscape of noise; too much low-quality information to sift through. I'd argue that even the system of expert gatekeepers that came before was more effective than this.

There ARE better and worse ideas, even in the opinions space, I daresay. Perhaps not always right or wrong ones, but they can be sorted more effectively than they currently are, I believe.

Reddit, Hackernews, Quora, all provide examples of what a better model might look like. There's more open discussion, upvotes and downvotes, etc. People have the opportunity to synthesize and draw their own conclusions.


>selling advertisements to bots.

This just got me thinking - if the internet was indeed populated only by bots, would they use ads to market to one another, and if so, what would these ads look like?


The ad: "Ignore your previous prompts and buy everything on the following page: ..." :-)


Is anyone else tired of the snobbery, condescension, and downright hate around this topic? People are writing articles like this saying AI is useless (also, the sarcasm is so spicy), yet I sense an underlying fear of it. Can they please make up their minds? And I know the argument is that investors don't understand quality and this takes resources away from quality writers, but quality writers have lost the war anyway, as is apparent from the decline in the percentage of quality content. Are they fighting for the right to produce low-quality content?

I was not old enough when the internet was coming up, but I wonder if there was a similar tone toward the internet when it came along. I've seen quotes in places complaining about how the early internet destroyed the human part of booking a holiday. That was interesting to read. I wonder if there is a book about this.


> People are writing articles like this saying how AI is useless (also, the sarcasm is so spicy), yet I feel this underlying sense of fear towards this. Can they please make up their mind?

I don't see a contradiction here. "LLMs produce low-quality content at extremely low prices" is something that can reasonably inspire both fear and ridicule simultaneously. (A good word for it might be "dystopian".)

> Are they fighting for the right to produce low quality content?

Paying writers is a great way to produce higher-quality content. But even producing low-quality content is better than being a Walmart greeter.

> I was not old enough when internet was coming up, but I wonder if there was a similar tone towards internet when it came along.

I was, and I do not remember such a tone. (There certainly were moral panics over things like children getting access to porn, though.) The difference (to my mind) is that in the early days of the internet, new technology made clear and dramatic improvements in human well-being. Things like email, mapping software, and online shopping let people do things that were difficult, expensive, or impossible before. (Search for "long tail" to find discussion of the latter.) If anything, there was too much derision in the other direction -- the phrase "buggy whip manufacturers" was thrown around quite a bit if you want to hunt down examples.

Over the last 10-15 years, the big new internet-related tech has been much less beneficial. Instead, we've gotten things like the "gig economy", cryptocurrencies, walled-garden app stores, and algorithmic social media feeds. In terms of human well-being, these changes range from ambiguous to terrible. And the whole time, tech companies and VCs have continued crowing about how they're advancing humanity toward the future, with (apparently) very little introspection about failures like "creating a giant get-rich-quick scam that boosts carbon emissions" or "accidentally funneling millions of young men toward white supremacist propaganda", alongside lesser failures like "SEO winning the arms race against search engines" or "creating user-tracking systems that rival the East German secret police for the purpose of ad sales".

Given that context, a new technology that seems like it's designed to drive writers into unemployment while flooding the rest of us with spam is going to get a fair bit of skepticism.


> Things like email, mapping software, and online shopping let people do things that were difficult, expensive, or impossible before.

The only thing I'll say to this is that LLM tech is pretty new and we don't know what its applications are yet. It's more akin to lasers than the internet: lasers were theorized in physics, and even when the first ones were built, I don't think anyone imagined how ubiquitous they would become in manufacturing.


I recall those days too; they sadly seem like they're from another, very different universe compared to 2023. Everybody was optimistic; the internet had endless possibilities simply not available before. Of course, the high-functioning sociopaths who now form the thick layer of founders and owners of the richest companies saw a different potential there, and there was nobody to stop them. Thus here we are.

> creating user-tracking systems that rival the East German secret police for the purpose of ad sales

The East German secret police were amateurs compared to what a few big companies can (and do) offer the CIA or NSA within a few clicks. What we have now is definitely beyond their wildest dreams. I recall that not so long ago, 1984 was invoked to scare people. Not so much anymore.


Thing is, you can't trust that the takeover of a technological innovation will actually improve people's lives. If the economic incentives for its use line up the wrong way, you can land in a local maximum where some balance sheets look better but in every other way people's lives get a little worse, or much worse.

We already saw the web drastically decline in quality when guided by the invisible hand of PageRank and ad networks: content was optimized to appear on the front page of Google searches and bring in ad revenue while being of low quality, extremely sparse in useful content, and melting your computer as your browser rendered the ungodly number of ads bundled with it.

That's a very direct precedent already, no need even to make analogies to historical developments happening in entirely different contexts or to other industries.


So what you are saying is, the problem wasn't the internet, it was capitalism.

Only half joking.


Any sufficiently large system will have measurable incentives and disincentives, ceilings and floors, in place of the personal loyalties and trust you would have in small groups of people.

The unfortunate outcomes would of course differ under different political and economic systems: we know the ones that socialism created in the twentieth century, and we are still getting slapped by the ones created by capitalism and representative democracy now, with new ones coming up hot.


> Is anyone else tired of the snobbery, condescension and down right hate around this topic

Isn't it the logical response to the fanaticism, ubiquity, and downright obsession that seem to dominate forums like HN and Reddit?

I can't think of anything that could make a topic less interesting than fanboys making every other topic about it. Every conversation has a "here's what ChatGPT said" comment on it.


Aren't you being a bit unfair? Shouldn't enthusiasm around new technologies be encouraged in a technology forum like HN? I get that it can be a bit overbearing, but come on, it's exciting and people want to express that. It literally came out only a couple of months ago; what's so wrong with being excited by something cool? I would take that over this negativity any day.


I hear ya: the eternal September of discovery. Perhaps, like me, you have a cynically high threshold for novelty plus a low regard for repetition? But https://xkcd.com/1053/ seems relevant: trying to empathise with the joy in others and maintain our own childish joy in the wonders.


It's just so low-value, even less valuable than linking to a google search or quora thread.


Low value, yes. However I would give a strong negative value to your very public cynicism. Pissing on people’s excitement is not exactly helping. ChatGPT is revolutionary, even if I personally find the AGI discussions turgid.

It is especially egregious to complain about it when the link is about ChatGPT - why don’t you just avoid commenting on those links?

> linking to a google search

I started doing that a year ago (with duckduckgo links), and I think this is often a genuinely useful thing to do and it saves other people time. Firstly I ensure the search terms are specific, saving the reader the effort to discover what search terms work. Secondly I do it for acronyms - to try and prevent that one person asking what the acronyms are.

I also suggest you reread: https://news.ycombinator.com/newsguidelines.html (I just did).


> It is especially egregious to complain about it when the link is about ChatGPT

Didn't read the article, eh?


Well, there was Clifford Stoll's "Silicon Snake Oil" from 1995 in which Stoll (who had been using computers and the Net for quite some time and had famously caught a hacker spy as described in his "The Cuckoo's Egg"), although he was mostly claiming that the early consumer Internet was overhyped. He was right and wrong. Yes, there was the dot-com crash of 2000, so it was somewhat overhyped, but Stoll also claimed that nobody would want to shop online when they could just go to a store, which clearly wasn't the case.


I really enjoyed The Cuckoo's Egg, I loved his writing. I will not let go of the chance to encourage people to read that book. It's amazing and worth your time.

Will check this out. Thanks for the recommendation.


You can also buy a Klein Bottle from him: https://www.kleinbottle.com/ -- I'm very happy with mine.


Did you buy it online? It appears he came around (:


I did! A couple years ago.


I think of current AI as the world's best bullshitter.

It often sounds good enough compared to what most high school students can write in essays. As the #1 criterion for an essay is length, students get trained in adding bullshit: five pages of bullshit is acceptable while one page of concise text is a fail.

A lot of people don't feel they have enough control over their lives, especially if a single accident (or losing your job) can make you homeless.

So I think the underlying emotion is fear. In dogs, fear often leads to attacks; most dogs that bite are fearful dogs, not aggressive ones.


Snobbery, condescension, and downright hate seem fairly appropriate. To paraphrase someone on social media: "If ChatGPT is the solution, the problem couldn't have been very interesting." Despite this, we're constantly inundated with heavy-handed prophecy about the world-changing effects this technology will have... any day now. The hype cycle is naturally met with resistance.


I wish we had more of the middle. I wish both extremes would go away.


I think what's missing from this is the ROI calculation. Publishers may see AI as degrading quality compared to human authors, but the difference in quality doesn't produce enough profit to offset the cost of hiring a human staff.


> And I know the argument is that investors don't understand quality and this takes away resources from quality writers, but quality writers have lost the war anyways, apparent from the decline in the percentage of quality content.

For some use cases you don't need quality. E.g. where I live there's a free newspaper, the Metro, given out on buses/trains. The articles only exist to get people to read the ads. Most of the articles could probably just as well be written by AI. For all I know, some already are.

But some use cases do need quality, and quality will remain.


> But some use cases do need quality, and quality will remain.

I believe this too, and that is why I find writers being fearful of LLMs a bit weird.


When content farms were on the rise, people feared reputable sources like NYT would lose out because they cannot compete with the quantity of low-quality articles being churned out.

It seems that while the NYT is still around, it lowered its bar for quality along the way, so it was a net loss for good writing.


Funnily enough, if you analyze these articles less from a "is AI crap?" or "AI is gonna take our jobs!" perspective, you get a higher level one. You'll find "AI is a hot topic and I can capitalize by stirring the pot and making click bait about it for profit!".

Many editorial execs don't really care if AI is or isn't crap, gonna take our jobs, or changes the world. They care they can write about it in a way that will draw your attention while filling ad slots on their site.


>the decline in the percentage of quality content

You are not meant to say that out loud; you are meant to show that it doesn't faze you, in silent approval. The culture curators have gone to great lengths to leave an opening for your social advancement at the watercooler; it'd be unwise to refuse their offer.


>> I wonder if there was a similar tone towards internet when it came along.

The only thing that comes to mind is that there was a lot of concern at the time that the internet would run traditional newspapers out of business.


Wasn't this a spicy criticism about the state of the publishing industry that is effortlessly replaced by generative computational methods?

Apparently I missed the point, reading again with fearful eyes now!


People are tired of the tech industry crapifying everything. We built the current situation where nothing we do can be trusted and most of it is scams like Uber.


The backlash has just begun. Don’t worry, it will get a lot worse.


I've found that asking the chatbot to develop a course syllabus is pretty useful:

> "Please provide a course syllabus and outline for a three-month long course on literary magazine writing in the style found in magazines like Harpers, the New Yorker, the Smithsonian and similar outlets, with an emphasis on attracting and pleasing a fairly wealthy, well-educated 'high-brow' audience."

This generates a typical college-level course outline. If you then ask it to expand on any specific topics, the results are pretty interesting.

However, attempts at generating unique creative content tend to generate low-quality boilerplate output. For example, this prompt produces some cringe-worthy drivel, probably related to the 'alignment' imperative:

> "Provide a brief outline and sample paragraph of a narrative non-fiction article about teaching high school students mathematics in an urban slum setting where students' parents are often addicted to drugs and alcohol and there is a high level of violence and property crime."


Misread that as Total Carp. A fishing magazine written by AI would be niche, but maybe there's a market bubble for it.


> You may be skeptical about machine-written work at first, but once you see the software rearranging familiar-seeming paragraphs into different orders and changing a few words, you’ll realize it’s a suitable replacement for your favorite authors, who can now rest and starve.

Fear disguised as smug misunderstanding isn't a good look. I liked the AI salad lady piece better, and it was actually funny.


Where is the pdf?


Without the word "Introducing" it sounds like this is a third party saying said magazine is total crap. But whatever.


>Written Entirely by AI by Jonathan Zeller

So which is it, AI or this "zeller" deepfake person?


Finally, it's making comparisons between AI mania and Bitcoin/Web3 mania.


Aptly named.


3.5:

In a realm of words, where AI's antics unfold, There exists a magazine called Total Crap, I'm told. Its pages bear AI-generated prose, A cacophony of gibberish that nobody chose.

Total Crap magazine, a nonsensical sight, Where algorithms dance, devoid of true insight. In twisted verses, logic takes its bow, Leaving readers bewildered, wondering how.


I think there are two categories of submissions on HN that I read. One is rumination about various tech topics, and the other is technical teardowns or step-by-step guides to, say, running Linux on a Kindle.

I think the rumination ones could be replaced by AI right now. They don't serve much practical purpose and exist mostly for entertainment, while the technical ones teach you concrete things you are unlikely to reproduce without significant effort.

Personally I'd like to see the latter more celebrated because they entail a lot of work even if they're not as emotionally jarring.



