If a PR Says “Artificial Intelligence,” There’s a Good Chance It’s Meaningless (slate.com)
137 points by giuliomagnifico on May 6, 2023 | 71 comments


It's far easier to do productive, useful things with AI if you just treat it as a tool, like a chainsaw or a pulley.

Everyone agrees there's a lot of hype right now but no one thinks _they're_ a part of it. You know why? It's because we're told we're on an exponential growth curve (we are), so everyone is trying to force it. So each time there's a breakthrough, we're desperate for it to be the rocketship that takes us there.

It's time to step back and let the exponential thing happen on its own. We didn't get here because people in the 20th century sat down and plotted a way to achieve exponential growth; it just sorta happened.


>It's far easier to do productive, useful things with AI if you just treat it as a tool, like a chainsaw or a pulley.

I am seeing a lot of comments paraphrasing this, without pointing to anything.

A lot of comments also say confidently and repetitively that AI is different from crypto.

The best use case I have found so far for ChatGPT is...editing HN posts. [1] But after being mildly satisfied with it once, it seemed like too much bother to use it regularly. Like being a nobody and getting an autopen [2] to sign for you.

But more than a month ago, there was a "Show HN" by someone who claimed that they had an AI-powered solution to writing SQL. [3] The tagline was literally "Never write SQL again." That sure sounds like something that could replace real people's jobs, that could be spun into a multibillion dollar market cap.

I tried it, made an attempt at a constructive comment without being negative, and there was not one response, from the submitter or anyone else.

I could explain in scathing terms how useless it appeared, but anyone capable of understanding what writing code is could read between the lines, and nobody like that engaged, so I let it lie.

What is a reasonable person to think about real applications?

[1] https://news.ycombinator.com/item?id=35487015

[2] https://en.wikipedia.org/wiki/Autopen

[3] https://news.ycombinator.com/item?id=35427229


I've noticed the same thing. So many proponents of AI don't validate their results.

I'm reminded of the Microsoft "make more robust" AI feature in VSCode. Their flagship example screenshot was flat-out wrong.

The starting code is an HTML form with a clear bug. It has an onclick rather than onsubmit handler, which means pressing the Enter key won't submit the form properly.
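
To illustrate the difference (my own minimal TypeScript sketch, not Microsoft's actual example; the element IDs and the sendForm helper are made up, and it assumes the handler is hung off a plain, non-submit button): wiring the logic to the button's click handler misses the Enter-key path, while handling the form's submit event covers both.

    // Buggy: only runs when the button itself is clicked, so Enter does nothing useful.
    document.querySelector<HTMLButtonElement>("#send-btn")!.onclick = () => {
      sendForm();
    };

    // Fixed: handle the form's submit event, which fires for Enter and click alike.
    document.querySelector<HTMLFormElement>("#contact-form")!.onsubmit = (e) => {
      e.preventDefault(); // keep the page from reloading
      sendForm();
    };

    function sendForm() {
      // ... validate and send the form data ...
    }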

Their advertised fix doesn't address that issue. Instead it adds a CSS vendor prefix. First, manually adding vendor prefixes is almost never the right solution; just have one of the existing tools do that automatically. Second, this specific vendor prefix was only in use for a very short period years ago. So almost all users currently use browsers that don't need it, and almost all users of outdated browsers aren't helped by the prefixed version.

And this is a case where Microsoft would have had subject matter experts right down the hall from whoever wrote this announcement. It makes me even more skeptical of applications outside of tech.


Generative AI is at the point where it can BS at a college level. This is great for applications where things don't need to be right (disinformation bots, term papers), but doesn't work when there's little room for error (software engineering). It's already being used to write ad copy; I'm waiting for the first lawsuit because of a deceptive ad on Google.


I just use it as a super search engine and validate the results myself. Sometimes I only know half the words I need to find something, so I just ask GPT-4 to give me 10 things it could possibly be and then look them up, rather than waste time with pedantry on web forums trying to find the words that way.

I fail to understand why people get so up in arms about how some people find use in these tools. Does not work for your niche? Cool, then just don't use it.


That's how I use it too. If you try to search some topics on search engines nowadays, you get a lot of blog drivel with the same cliches all over and very little in the way of factual explanations. You ask ChatGPT and it gives you one to four paragraphs straight to the point. You can query it further and validate what it says with literature or more focused searches.

It feels like I'm getting massively subsidized by the AI hype tho. Even if I were to pay 20 dollars a month for that, it wouldn't come close to covering how much it cost to train and host it.


But we are here because they drove hard for each big breakthrough that would “change the world” — and many did, from mechanization to electrification to digitization. (To say nothing of agriculture, chemistry, and medicine.)

That growth curve doesn’t “just happen”, but is precisely the result of people trying to constantly push for the next Big Thing (TM).


Of my concerns with AI, the primary one wasn't that it would really replace people but that it would be yet another means for people to claim "computer says no" as they fire people. Of course, like most of the C-suite, they'll never face the consequences of their own actions, given that they'll cut even more to make sure they still take their bonus that year.


I thought the title was about Pull Requests. Ah words.


Yes same, but also the title still made sense.


Same, I pictured the OP building a filter to deny PRs based on them containing the word "AI".


There is no artificial intelligence. There are only systems that are very good at generating text or images when given a text prompt. That's not intelligence, and it's misleading if not dangerous to call it that.


Two generations ago we were calling chess programs AI. Last generation it was face detection in images. Each generation makes a new clever thing, calls it AI for a while, puts it to good use and forgets about it for the next shiny thing called AI for real this time.

Don’t get me wrong: the new large models are amazing, and will be useful. Not dissing them. Just observing the history of the term ‘AI’ in popular usage.


This is an irony I've noticed too. Some engineers tell me their managers are in a state of panic about how to "best make use of AI" (translated: how to ride the current hype wave to get nice PR points). The correct answer of course is "we already use AI and have been doing so since the 80s".

Considering it is ridiculously impractical to use ChatGPT in many areas where 1980s AI techniques are in use (think metaheuristics, chess, etc.), I suspect this is rather common.


There is an old quote about this that I’m too lazy to find at the moment (from the 80s or 90s?). The gist was that we call the new thing AI until we really understand it, and then we start calling it machine learning.

Lather, rinse, repeat.


It would be nice to have a table where, say, column A has "linear regression," and column B has something the business already uses it for.

Sure, we can also associate supply chain and logistics bin packing, inventory forecasting, existing warehouse automation--including computer vision and such.

But once you get to NLP, deep learning, and LLM, doesn't it kind of go off the rails?


> Each generation makes a new clever thing, calls it AI for a while

I come from big ag country. We used the term AI in regards to a breeding method. Seeing "AI" in all these headlines still gives me a chuckle.


Oh my. I hadn't made that connection. Now I'm not going to be able to unsee it.


> Two generations ago we were calling chess programs AI...

I think this observation held up well until AlphaGo hit the scene. Then it started to sound a bit less insightful. At this point, it's just whistling past the proverbial graveyard.


> There is no artificial intelligence. There are only systems that are very good at generating text or images when given a text prompt. That's not intelligence, and it's misleading if not dangerous to call it that.

Disagree - GPT4 definitely has a level of reasoning, can abstract problems, can apply knowledge in new and different ways, so it definitely meets some definitions of intelligence.

What's your definition of intelligence that GPT4 wouldn't fulfill at some level? (Personally I believe it's possible to have intelligence without sentience/self-awareness)


It's missing generality. It can't do basic maths, or even follow instructions. It can't make valid moves in a game. It can only do what it was trained to do: make a good guess at what word should come next.

I don't mean to down-play how impressive models like GPT-4 are - they are very impressive and have great utility.

The problem is we are talking about PR, and when you throw around words like "artificial intelligence" it misleads everyone who is not intimately familiar with the limitations of such models.

> Personally I believe it's possible to have intelligence without sentience/self-awareness

The general public doesn't make that distinction when you throw around terms like artificial intelligence. They hear artificial intelligence and imagine "thinking machines".

No doubt people will say this is just moving the goalposts - that whatever these models achieve it will never be "AI", but that's demonstrably false. The concept of AI has existed in film and other media in the same form since forever, and it's pretty clear if you put those representations side by side with what we actually have today, that we are not even close yet.

For some reason "advances in machine learning" just isn't hip enough to get funding, so "AI" gets thrown around instead.


> It can only do what it was trained to do: make a good guess at what word should come next.

Disagree - GPT's mechanics are that it guesses what word/token should come next; however, it is now displaying emergent capabilities which do show an agent that can generalize. Maybe not to 'AGI' levels, but it is starting to show little signs of it.

> It can't do basic maths, or even follow instructions. It can't make valid moves in a game.

It can do basic maths - better than most humans (but it cannot do maths better than a calculator). If I ask GPT4 to calculate 51 times 102 it will get the answer, despite most humans struggling with this task. It does this just via internal representation without any specific 'calculator' API call or functionality.

It can follow instructions - if I give it the string "abcd" and then tell it to follow these steps: "1) swap b and c around, 2) reverse the string, 3) ignore step 1", it will output "dcba" and describe how it reached that output.
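
Just to make the expected answer concrete, here is a trivial check of that example (my own TypeScript illustration, not GPT4's output): step 3 cancels step 1, so only the reversal applies.

    const input = "abcd";
    // Step 1 is cancelled by step 3, so only step 2 (reverse the string) applies.
    const result = input.split("").reverse().join("");
    console.log(result); // "dcba"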

It can make valid game moves - without being formally taught chess it will make valid chess moves most of the time (see: https://www.lesswrong.com/posts/xyjhFCSSXZsW6HDBb/a-chess-ga...). It made 29 valid moves before making an invalid one. Again, GPT4 has not been specifically trained to play chess; it has just picked up some chess rules and games while reading the internet, and this has somehow formed some sort of internal representation for chess within its model. I can also describe a new/novel game or puzzle to GPT4 and it can have a go at playing and solving it.

What it can't do at the moment is learn, although learning isn't part of a lot of definitions of intelligence, which is why we have to define intelligence before we can say whether it is or isn't intelligent.


It doesn't even "understand" words and what they are made of. This is an extremely simple task that a child could do, and yet:

- Give me a list of US and European officials with names ending in "-land"

1. Mary Landrieu - former US Senator from Louisiana
2. Harold Landry - US Congressman from Tennessee
3. Jacky Rosen - US Senator from Nevada
4. Chris Van Hollen - US Senator from Maryland
5. Brian Mast - US Congressman from Florida
6. Roland Rudd - British businessman and political campaigner
7. Sadiq Khan - Mayor of London, England
8. Martin Schulz - former President of the European Parliament
9. Angela Merkel - Chancellor of Germany
10. Mark Rutte - Prime Minister of the Netherlands
11. Stefan Löfven - Prime Minister of Sweden
12. Erna Solberg - Prime Minister of Norway
13. Alexander Stubb - former Prime Minister of Finland
14. Kaja Kallas - Prime Minister of Estonia
15. Xavier Bettel - Prime Minister of Luxembourg.


We can point to a handful of strange examples where GPT4 appears to be unsuccessful, but that ignores the buckets of examples where it knocks things out of the park.

And the areas where it isn't successful appear to be closing quickly as new discoveries are made - GPT4 is dramatically better at maths and truthiness than GPT3.5, for instance (still not perfect at either, but improved enough).

GPT4 can do things a child can't do, and a child can do things that GPT4 can't do. You don't have to be able to do everything a human can do to be classed as intelligent, and GPT currently underperforms humans in some areas and overperforms them in others.


Given a list of officials, absolutely, and if there is no list, nobody would just start spouting off names that don't match the rule at all -- and the failure here is due to inability to apply basic stated directions in a logical manner. Most likely this is due to the processing of tokens not being on a character level, but it will also imply that it possesses character-level language knowledge if you ask it to produce a list of infrequent trigrams in the English language:

> As an AI language model, I do not have access to a pre-existing list of the least frequent trigrams in English. However, I can generate a list of some of the rarest trigrams based on the frequency of occurrence in a large corpus of English language text. . . . (list follows)

How can I be sure of this list it claims to generate when I have evidence it can't identify substrings?


Inability to complete certain tasks does not mean that the system is not intelligent.

It’s possible for a system to be intelligent and also fail at these questions.

IMO a system is intelligent if it can answer some questions that require intelligence (by definition) - it does not have to be able to answer all questions.


I hadn’t heard of “truthiness” outside of Boolean comparisons in programming languages, but when I looked it up, it doesn’t seem like something worth aspiring to:

> Truthiness is the belief or assertion that a particular statement is true based on the intuition or perceptions of some individual or individuals, without regard to evidence, logic, intellectual examination, or facts. Truthiness can range from ignorant assertions of falsehoods to deliberate duplicity or propaganda intended to sway opinions.

https://en.m.wikipedia.org/wiki/Truthiness


Yeah, the previous comment is almost completely wrong. But on the last point, it didn't get here by hard-coding, and the reinforcement learning part at the end, I think, shows it definitely can "learn". I guess one thing I would still point to is that it still has a weakened sense of "self", which goes in line with it distinguishing things learned in training vs in deployment (though maybe this is just intentional, I don't know).


Yes, I agree - I really meant it doesn't directly learn from your interaction (i.e. it forgets you spoke as soon as you close the session, and only learns indirectly through the next training run - your words do not affect its internal representation in real time the way they would a human's). The learning also appears to require more information than a human would need; humans can do 'more with less'.

You are right that it clearly can learn though, the learnings are just baked-in with each cycle.


I think the Microsoft Tay incident is reason enough that OpenAI knows it would be a bad idea to structure things in such a way that the internet can wreck it in a matter of hours, but I certainly think they could make it learn in real time if they wanted to. They are running RLHF at least with the current ChatGPT, but in theory maybe they could do something like giving each user their own LoRA and using their conversations to fine-tune a version specific to them.

I suspect we'll start seeing ideas akin to that tried in the open source community.


> I can also describe a new/novel game or puzzle to GPT4 and it can have a go at playing and solving it.

It’s interesting that you brought up chess. It can play chess reasonably well because there is a huge amount of chess data on the web. In that sense, it is not too surprising to me. If someone several years ago had said “I scoured the entire internet for chess-related text and fed it into an AI model, and it can play at a low-amateur level,” I would be impressed, but I wouldn’t be hailing a new era of general intelligence.

An example that illustrates that huge amounts of specialized data is needed for it to do any particular task: I fed GPT-4 the rules of Duck Chess. Duck Chess is exactly like regular chess but after each move, the player who just moved takes the rubber duck and places it on any empty square (there is just one duck shared between the players, and you have to move the duck: you can’t leave it on the same square for consecutive moves). Pieces cannot move through or stop on the duck. The game eliminates the concepts of check and checkmate, and ends when a player captures the opposing king.

I have given a description of duck chess to many humans (yes, I love duck chess!) who are usually much worse than ChatGPT at regular chess. When these humans play duck chess for the first time, they intuit some basic principles: use the duck to block natural developing moves for your opponent in the opening; you can often capture a defended piece without consequence by placing the duck between your capturing piece and the defender; if you want the duck to not be on a certain square for your next move, then put it on that square after your own move, since your opponent is obligated to move it; and so on.

GPT-4 meanwhile utterly fails to play the game. More often than not, it will try something illegal: putting the duck on an occupied square, passing a piece through the duck as though it weren’t there, or attempting to capture the duck after being told that’s not possible. When it does play legal moves, the duck placement is nonsensical. When asked why it placed the duck where it did, it betrays a lack of basic understanding of the rules. Its explanations tend to forget that its opponent gets to move the duck themself after their move.

This is where the “but humans make mistakes too!” arguments break down. No human who can play regular chess at the level of GPT-4 would continually struggle to make legal moves in duck chess. 99.999% of them would make better moves than GPT-4.

To me, this supports the idea that GPT-4 is great at finding and exploiting patterns that it has seen millions of times in the training set. When you veer off the training data (and your problem isn’t a trivial interpolation of related concepts that are in the training data) it seems to fall apart completely.

As one more example, GPT-4 contains some very basic facts about the game Arimaa, an abstract strategy game like chess. It can recite the rules perfectly. But I can’t play Arimaa with GPT-4 because it fails on the very first step: choosing how to arrange your pieces. I once exhausted all 25 of my messages trying to get it to make a legal configuration of its pieces to start the game, to no avail.


> Disagree - GPT4 definitely has a level of reasoning, can abstract problems, can apply knowledge in new and different ways, so it definitely meets some definitions of intelligence.

That's from the humans who generated the original text that GPT4 is essentially mining.

Furthermore, until you have a concrete definition of exactly what intelligence is, the question (and the answer) don't mean much.


Did you invent everything you know? …or did you learn from humans giving you training examples?


I vote that we humans claim we’ve solved “AI” and now have AGI and move on with solving actual real world problems instead of pumping billions of dollars of time and money into “AI” and playing with computers all day.

Climate change is some real shit.


It writes better texts than many adult humans, and outside of a person's area of expertise it probably performs better than an adult human, as long as the problem can be described in text.

It is overly optimistic, and does make mistakes, but it certainly shows intelligent behaviour.


It’s interesting that we view the fact that it makes mistakes as evidence it’s not intelligent. By this metric then, humans are also not intelligent.


What's interesting to me is how ChatGPT with GPT-4 can randomly fail to follow your instructions (for the same instructions, in the same session). It's these kinds of behaviors that show it is, in fact, not intelligent, but that the probabilities line up properly for the emergent behavior to look like it is.


By that measure most humans are not intelligent … go to a restaurant 10 times and tell me the failure rate of getting precisely what you want :)


If that restaurant is McDonald's or KFC then about 10% ~ 15%?


A rock doesn’t make mistakes, it’s awesome at being a rock.


It's just semantics. Some people use intelligence to mean sentience, others use it to mean directed problem solving / solution-finding.


Could you give an example of people who explicitly claim that sentience is equivalent to intelligence? I’d agree that some arguments trade on an equivocation between the two, but I don’t think their adherents intentionally do so.


The post I replied to seemed to make the conflation. Because “just guessing the next word” seems like a reasonable refutation to sentience, but not to intelligence.

I agree that it’s not usually an intentional mixup. From what I’ve seen it’s mostly a lack of rigor in defining terms.


Imo, it's time to define tiers of intelligence, and then say "GPT4 is a level 3 AI". We can start by observing animals and birds to define the basic level 1 intelligence: it's the ability to react to the environment, but these animals cannot learn because they don't have much memory. Level 2 adds sub-conscious memory that enables learning basic tricks, but learning is very slow. That's monkeys. Level 3 adds conscious memory: the subject is aware of its existence and can use it to remember faces or learn complex behaviours. That's early humans. Level 4 would introduce the concept of thought: an imaginary object that can be manipulated to model reality. Level 5 is abstract thought: it's no longer limited to real things and can model abstract concepts that have never been seen by the subject.


Level 5 might be recursive - thinking about thinking. Or it might be observable thought - the ability to watch yourself think.


> Imo, it’s time to define tiers of intelligence

First, we would have to define the concept of intelligence.


Words change over time. Plenty of words have changed meaning in my lifetime to become less formal than they once were. And I guarantee I used to be this pedantic about some of them, particularly around the usage of grammar and punctuation, and in some cases am still very stubbornly pedantic even though it’s clear the world has moved on and I’m the only one left on the hill.

The phrase “artificial intelligence” is the same here. You may still die on this hill, but the world has moved on. AI has become overloaded to mean a wide variety of things and not just the “dangerous” generalized terminator robot you’re envisioning.

One strategy to cope is to come up with a new word and define it as you’d like, being careful to not overload it.


So when an LLM writes a credible epilogue to an existing novel [1], that's not "intelligent"? Sounds a bit like a god-of-the-gaps argument.

1: https://huggingface.co/mosaicml/mpt-7b-storywriter


> There are only systems that are very good at generating text or images when given a text prompt.

How is that not intelligence?


To me it's a tech colloquialism for a thing that uses a neural net in some capacity. I think it's misleading to use it to describe other types of ML (e.g., SLAM robotics).


Classical SLAM by optimizing constraint graphs has little to do with ML. The optimization function is explicit, and there’s no training.
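
To make that concrete, here is a toy sketch of what "optimizing a constraint graph" means (my own illustration with invented numbers, not any real SLAM library): the error function is written out by hand and minimized directly, and nothing in it is learned from data.

    // Toy 1-D pose graph: constraints are measured relative displacements between
    // poses, plus one loop closure. All numbers are made up for illustration.
    type Constraint = { i: number; j: number; z: number }; // z is the measured x[j] - x[i]

    const constraints: Constraint[] = [
      { i: 0, j: 1, z: 1.0 },
      { i: 1, j: 2, z: 1.1 },
      { i: 2, j: 3, z: 0.9 },
      { i: 3, j: 0, z: -3.05 }, // loop closure back to the start
    ];

    // Minimize the sum of squared constraint errors with plain gradient descent.
    // The objective is explicit; there is no training phase and no dataset.
    const x = [0, 0, 0, 0]; // pose 0 stays anchored at the origin
    for (let iter = 0; iter < 2000; iter++) {
      const grad = [0, 0, 0, 0];
      for (const { i, j, z } of constraints) {
        const e = x[j] - x[i] - z; // residual of one constraint
        grad[j] += 2 * e;
        grad[i] -= 2 * e;
      }
      for (let k = 1; k < x.length; k++) x[k] -= 0.1 * grad[k];
    }
    console.log(x); // converges to roughly [0, 1.0125, 2.125, 3.0375]

The disagreement around the loop (the odometry sums to 3.0 but the closure measures 3.05) gets spread evenly across the constraints, which is the same least-squares behaviour a real pose-graph optimizer exhibits at much larger scale.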


There are arguments for this position that are philosophically and scientifically respectable and to which I am inclined. But merely repeating this nostrum isn’t very constructive. Of course, I’m also repeating nostrums here, but I am not making any definitive claim here.

1. Intelligence is surely best characterised not bivalently; if, then, it’s a matter of degree, it is at least somewhat non-trivial to show that LLMs make no progress whatsoever on previous AI (soi-disantes).

2. It’s also unclear that intelligence is best characterised by a single factor. Perhaps that’s uncontroversial in the psychometric literature (I wouldn’t know), but even then, why would g be the right way of characterising intelligence in beings with quite different strengths and weaknesses? And, if ‘intelligence’ admits multiple precisifications, the claims that (a) some particular system displays intelligence simpliciter (perhaps a pragmatic notion) on one such precisification and (b) that some particular system displays some higher level of intelligence than before in that respect are yet weaker and more difficult to rebut.

3. It’s unclear whether ‘are’ is to be construed literally (i.e., indicatively) or as a much stronger claim, e.g., in the Fodorian or Lucasian vein, that some wide class of would-be AI simply can’t achieve intelligence due to some systematic limitation.


I had a feeling you had something interesting to say, but I couldn’t make heads or tails of it.

I am being serious. I just applied GPT4 to ELI5 your comment to me. I feel like I have crossed some threshold and I’m not sure if I am proud of it, but there it is.

I am still not entirely sure what your main point is, but I learned about the Fodorian and Lucasian arguments which I don’t find particularly impressive, but then again, I need a language model to explain things to me. Interesting nonetheless.

I did not know about “g” either. How did I survive for so long you may ask and it is indeed a miracle. Anyway, a nuanced understanding of intelligence seems reasonable and useful.

The concept “nostrum” was also new to me as was the word “simpliciter”. In fact I asked GPT4 for a table of uncommon words and concepts in your post and it was quite substantial.

All in all I rarely come across a post that makes me feel like an ape and sets me on a path of creativity and knowledge. Thank you for that.

Edit: obligatory response in the proper style:

“The manner in which this prose is articulated can be characterized as both prolix and imbued with a certain aesthetic appeal. One is left to ponder the origins of the author's stylistic inclinations. As for my own stance concerning the matter of artificial intelligence, it may be succinctly encapsulated as follows: "AI demonstrates utility, and as such, it possesses merit." This, regrettably, constitutes the extent of my intellectual engagement with the subject.”


You’re right—the way I wrote the original comment isn’t particularly easy to parse. I’d been reading philosophy of language all day, and didn’t pause to edit. I’ll try to rephrase here; I don’t think the points are particularly complicated, so if the exposition here is still inadequate, that’s my fault.

1. On one view of intelligence, something is either intelligent or not. On another, some things are more intelligent than others, but there’s no clear cutoff between intelligent and non-intelligent things. On the first view, it seems quite plausible that actually existing ‘AI’ (e.g., GPT) doesn’t count as intelligent. On the second view, actually existing ‘AI’ seems to be at least somewhat intelligent: more so than most other software we’ve written. If the second view is right, it’s unhelpful in many cases to simply pronounce things intelligent and unintelligent.

2. By way of analogy, suppose I say that a walking route is quite hard. I might mean that it’s very long. Or I might mean that it’s hilly. Or I might mean that it’s very boggy. Each is a perfectly good reason to say that the route is hard. So a walk that’s merely quite hilly counts as hard, even if it’s fairly short and the ground is dry.

We might say that attributions of intelligence are similar. If so, we can attribute intelligence to systems for many different individually respectable reasons. Perhaps a system is intelligent because it can respond to novel situations in some appropriate way. Perhaps it’s intelligent because it predicts a certain statistical parameter correctly. Perhaps it’s intelligent because it’s small but can correctly deal with a wide range of situations.

If the analogy is right, it would be odd (perhaps wrong) to say that a system good at one of these just isn’t intelligent because it falls down on the other measures. If so, surely GPT counts as intelligent for at least one respectable reason or another.

On the other hand, suppose I call someone tall. There’s only one way to be tall. Being fat, or having muscly arms, or having long legs but a short torso don’t count. So the analogy doesn’t apply to all concepts. Does it apply to intelligence? Initially, it might seem that it doesn’t: surely there are lots of ways to be intelligent. But I’ve heard that the psychometrics literature suggests that all these measures correlate to a great degree, and statistically can be predicted by a single-factor model (thus ‘g’). That might suggest that there really is only one way to be intelligent, and that appearances are misleading.

I am not familiar with the psychometrics literature, so I wouldn’t know; maybe the single-factor model is wrong. But my point is this. Even if the single-factor model is right, it’s only been shown to be right about humans (so far): their statistical base has comprised humans. So maybe a multi-factor model of intelligence works better for would-be machine intelligence. For example, perhaps arithmetic ability in humans is predicted well by a single factor; maybe it’s even reducible to some single form of intelligence. But we can obviously separate arithmetic ability from e.g. analytic ability in computers, to an almost arbitrary extent, by making very good calculators. (And LLMs are often not very good at arithmetic, though I gather that’s being improved.) If that is so, intelligence is more like difficulty of a walking route than tallness. And so that’s another reason to avoid straightforward denial that would-be AI is or could be intelligent.

3. It’s quite plausible that no presently existing would-be AI should count as intelligent. But we don’t know whether that’s a general limitation or not. And if there are general limitations, how general are they? For example, maybe LLMs couldn’t be intelligent but some GOFAI type thing could be. Or maybe we simply need a new architecture.

One argument we could read in Fodor is that neural networks have to implement a so-called language of thought to be meaningfully intelligent. That would be quite a general limitation, though arguably one we could overcome. (I’ve always been a bit confused by what Fodor really meant by a language of thought, and in particular what he required of mental representations, but I haven’t made a full study of him yet.)

A much stronger argument from J.R. Lucas is broadly ‘anti-mechanism’, which would roughly include everything we can presently engineer or can be run on a Turing machine. This is very strong, and not many people agree in my experience.

The point of my comment is that these matters are complicated, and the comment above didn’t really address these complications. Sometimes nuance doesn’t add much or isn’t worth it. (I quite like Kieran’s ‘Fuck Nuance’ as a lesson for all theorising, not just sociology.) But sometimes it does matter. ‘[T]here is no artificial intelligence’ is hasty enough to require a response.


you could have said all of that in a much clearer way with no loss of content


Ironically, ChatGPT would have made whatever point the author was trying to make (I’m not really sure what the point was, to be honest…) much more succinctly and clearly. Maybe that’s one use for ChatGPT: as an anti-bloviating converter.


ChatGPT would have used simpler words, but it bloviates like hell. It’s one of the reasons that people on HN can usually spot and downvote ChatGPT-generated comments: they go on and on for multiple paragraphs when their point can be made in one or two sentences, and they go to great lengths to hedge everything they say.


I’m not sure that the point can be made much more succinctly, although I agree the original wasn’t written very understandably. I’ve reformulated my point at greater length in response to the sibling.

I’ve tried GPT on some topics in philosophy of language, and it hasn’t really done particularly well. I don’t have any strong reason to think that such limitations will either persist or be overcome, however.


I agree—I should have edited it. I’ve responded to the sibling comment.


If it’s a PR, there’s a good chance it’s meaningless.


I was on a webinar this week where the speaker suggested that ‘even if you’re selling t-shirts, call yourself a generative AI company.’ I immediately left the webinar.


Peak will be when Kodak shares rocket because of something something AI.


"A.I.-PR industrial complex" is a bizarre term, presumably coined by the author. I'm not sure what they mean by it, but seeing as "crypto" was one of its predecessors (as in the crypto-industrial complex?? The crypto press release industrial complex??), I'm guessing: not much.

> There’s just too much attention that comes with saying the term “A.I.” for anyone to stop now. ChatGPT isn’t the only part of the A.I. boom that sometimes just makes stuff up.

This is absolutely the case.

P.S. what is a "threadbois"?


"PR" means "public relations" in this context, not "press release." PR is deployed by capital to influence public opinion, drive interest & investment, etc. It is true on its face AI is in a PR-driven hype cycle just like crypto before it. Having a hype cycle doesn't mean there's no value in the thing being hyped, only that its perceived value is being driven higher during the hype cycle.

For "threadboi", here's a good explanation https://letmegooglethat.com/?q=threadboi


A heady red wine made of grapes from the Threadbois region.


What we call 'journalists' are people with an iPhone and a laptop, in an open-plan office from 9 to 5, and on Twitter all day long.


No; that's what media outlets now call "journalists." Discerning citizens (especially those of us with legitimate journalism degrees and experience) don't.


Real Journalism Has Never Been Tried


Well, someone tried, but they sometimes die prematurely.


So far, ChatGPT has failed the Turing Test worse than the Ouija board, which, in my experience, didn't fail at all.

This plus the Fermi paradox gives me the creeps.

It raises the uncomfortable question of "free will".


prefix: Now that there's no economic advantage to investing,



