Ask HN: Is anyone else getting AI fatigue?
205 points by graderjs on Feb 9, 2023 | 365 comments
AI is great. ChatGPT is incredible. But I feel tired when I see so many new products being built that incorporate AI in some way, like "AI for this..." "AI for that..." I think it misapplies AI. But more than that, it's just too much. Right? Right? Anyone else feel like this? Everything is about ChatGPT, AI, prompts or startups we can build with that. It's like the crypto craze all over again, and I'm a little in dread of the shysters again, the waste, the opportunity cost of folks pursuing this like a mad crowd rather than being a little more thoughtful about where to go next. Not a great look for the "scene" methinks. Am I alone in this view?



Engineers are always building things that are incredible, then dismissing them as ordinary once the problem is solved: “oh, that’s so normal, it was just a little math, a little tweak, no big deal”.

AI has gone through a lot of stages of “only X can be done by a human”-> “X is done by AI” -> “oh, that’s just some engineering, that’s not really human” or “no longer in the category of mystical things we can’t explain that a human can do”.

LLMs are just the latest iteration of “wow, it can do this amazing human-only thing X (write a paper indistinguishable from a human’s)” -> “doh, it’s just some engineering (it’s just a fancy autocomplete)”.

Just because AI is a bunch of linear algebra and statistics does not mean the brain isn’t doing something similar. You may not like the terminology, but how is reinforcement “learning” not exactly the same as reading books to a toddler, pointing at a picture, and having them repeat what it is?

Start digging into the human with the same engineering view, and suddenly it also just becomes a bunch of parts. Where is the human in the human once all the human parts are explained the way an engineer would explain them? What would be left? The human is computation also, unless you believe in souls or other-worldly mysticism. So why not think that AI, as computation, can eventually equal a human?

That GitHub Copilot can write bad code isn't a knock on AI; it's real, and a lot of humans write bad code too.


The problem with LLMs, in my view, is that they're capped at what already exists.

Using them for "creative" things means they can only parrot things back in the statistically average way, or maybe attempt to echo an existing style.

Copilot cannot use something because it prefers it, or thinks it's better than what's common. It can only repeat what is currently popular (which will likely be self-reinforced over time).

When you write prose or code you develop preferences and opinions. "Everyone does it this way, but I think X is important."

You can take your learning and create a new language or framework based on your experiences and opinions working in another.

You develop your own writing style.

LLMs cut out this chance to develop.

---

Images, prose, (maybe) code are not the result of computation.

If two different people compute the same thing, they get the same answer. When I ask different people to write the same thing, I get wildly different answers.

Sure ChatGPT may give different answers, but they will always be in the ChatGPT style (or parroting the style of an existing someone).

"ChatGPT will get started and I'll edit my voice into what it generated" is not how writing works.

It's difficult for me to see how a world where people communicate back and forth in the most statistically likely manner is a good one.


All artists of every stripe have studied other art, have practiced what has come before, and have influences. What do you think they do in art school? They copy what came before. The old masters had understudies who learned a style. Isn't it an old saying in art that ‘there is nothing original’? Everything was based on something.

Humans are also regurgitating what they ‘inputted’ into their brains. For programming, isn’t it an old joke that everyone just copy/pastes from Stack Overflow?

Why, if an AI does it (copy/paste), is it somehow a lesser accomplishment than when a human does it?


> Why, if an AI does it (copy/paste), is it somehow a lesser accomplishment than when a human does it?

Because the kind of 'art' the AI will create will end up in a Canva template; it will be clip art for the modern PowerPoint or Facebook ad. Because corporations like Canva are the only ones that will pay the fees to use these tools at scale. And all they produce is marketing detritus, which is the opposite of art.

Instead of the "Corporate Memphis" art style that's been run into the ground by every big tech company, AI will produce similarly bland, corporate-approved graphics that we'll continue to roll our eyes at.


It's a fair point.

My concern is with the limitations in the creation of new styles.

I guess my view is that you send 100 people to art school and you get 100 different styles out of it (ok maybe 80).

With AI you've got a handful of dominant models instead of a unique model for each person based on life experience.

Apprentices learn and develop into masters. If that work is all moved to an LLM, where do the new masters come from?

---

I take your point about the technology. I have a hard time saying it's not impressive or similar to how humans learn.

My concern is more with what widespread adoption will mean.


The style can be influenced, however. It isn't unreasonable to imagine an AI that fine-tunes the style of the LLM output to meet whatever metric you're after.

As far as creativity goes, human creativity is also a product of life experiences. Artistic styles are always influenced by others, etc.


I generally agree that we quickly adjust to new tech and forget how impactful it is.

But I can’t fully get on board with this:

> but how is reinforcement “learning” not exactly the same as reading books to a toddler, pointing at a picture, and having them repeat what it is? Start digging into the human with the same engineering view, and suddenly it also just becomes a bunch of parts. Where is the human in the human once all the human parts are explained the way an engineer would explain them?

The parent teaching a toddler bears some vague resemblance to machine learning, but the underlying results of that learning (and the process of learning itself) could not be any more different.

More problematic than this, while you may be correct that we will eventually be able to explain human biology with the precision of an engineer, these recent AI advances have not made meaningful progress towards that goal, and such an achievement is arguably many decades away.

It seems you are concluding that because we might eventually explain human biology, we can draw conclusions now about AI as if such an explanation had already happened.

This seems deeply problematic.

AI is “real” in the sense that we are making good progress on advancing the capabilities of AI software. This does not imply we’ve meaningfully closed the gap with human intelligence.


I think the point is that we have been “meaningfully closing” the gap rapidly, and at this point it is only a matter of time; the end can be seen, even if it isn't yet completely written out in equations.

It does seem like on HN the audience is heavily weighted towards software developers who are not biologists and often cannot see the forest for the trees. They know enough about AI programming to dismiss the hype, but not enough about biology, and so they miss that this is pretty amazing.

The understanding of the human ‘parts’ is being chipped away just as quickly as we have had breakthroughs in AI. These fields are starting to converge and inform each other. I’m saying this is happening fast enough that the end game is in sight: humans are just made of parts, an engineering problem that will be solved.

Free will and consciousness are overrated. We think of ourselves as having some mystically exceptional consciousness, which clouds the credit we give advancements in AI. ‘AI will never be able to equal a human’, when humans just want lunch and our ‘free will’ is based on how much sleep we got. DNA is a program; it builds the brain, which is just responding to inputs. Read some Robert Sapolsky: human reactions are just hormones and chemicals responding to inputs. We will eventually have an AI that mimics a human because humans aren’t that special. Even if the function of every single molecule in the body, or every equation in AI, isn't yet fully mapped out, enough is to stop claiming 'specialness'.


> I think the point is that we have been “meaningfully closing” the gap rapidly

In your opinion, how wide is this gap? To claim that it is closing at a meaningful pace brings the implication that we understand the width. Has anyone made a credible claim that we actually understand the width of the gap?

> The understanding of the human ‘parts’ is being chipped away just as quickly as we have had breakthroughs in AI.

This is a thinking trap. Without an understanding or definition of the breadth of the problem space, both fields could be making perfectly equivalent progress and it would still imply nothing regarding the width of the gap or the progress made closing it.

> These fields are starting to converge and inform each other.

Collaboration does not imply anything more than the existence of cooperation across fields. Do you have specific examples where the science itself is converging?

My understanding is that our ability to comprehend neural processes is still so limited that researchers focus on the brains of worms (e.g. the roundworm C. elegans and its 302 neurons), and we still don’t understand how they work.

> and at this point it is only a matter of time, the end can be seen

Who is claiming we have any notion of being close enough to see the end? Most experts on the cutting edge cite the enormous distance yet to be covered.

I’m not claiming the progress made isn’t meaningful by itself. I’m struggling with your claim that we have any idea how much further we have to go.

Landing rovers on Mars is a huge achievement, but compared to the array of advancements required to colonize space, it seems like just a small step forward in comparison.


You are right, I'm playing fast and loose with some assumptions and opinions without citations.

I just don't like falling into the other trap of wasting my day writing a complete paper with citations for some loosely defined internet argument on a subject that is already stacked on a pile of controversy and misunderstandings. I think I could easily find a number of citations that have conflated or re-defined vocabulary. This is my opinion; I don't think I need to document a cited cross-reference list of these re-defined terms to say it.

Probably this is the same problem that exists between a research paper and a popular science book. Neither is as detailed and exact, nor as high-level and understandable, as everyone desires. So, yes, these are some opinions; it's just that, from a certain point of view, my opinions are more correct than others' opinions.


The point isn’t that you need citations - it’s that there is nothing to cite that can credibly inform us as to the size of the remaining gap.


Well, I'm really not trying to get the last word here. But if the problem is that we don't have any way to credibly inform the future, so we can't talk about it, then how can we ever talk about anything? There is an entire cottage industry of futurists who don't have any way to judge how far off their predictions are, to inform the remaining gap. Maybe you have the same issues with them. And maybe I do too, really; I'm pretty perturbed by so many researchers switching context and vocabulary to fit their own narrative. I'm just some Joe Schmo with an opinion, and am only pointing out that advancements have been occurring at a really rapid pace and almost universally (opinion) all predictions have been wrong so far. So maybe this gap will close rapidly.

Or, more to the main post, a lot of heads-down engineers cranking out solutions do lose sight of how far they are moving.


> the underlying results of that learning (and the process of learning itself) could not be any more different

To drill down a bit, I think the difference is that the child is trying to build a model - their own model - of the world, and how symbols describe or relate to it. Eventually they start to plan their own way through life using that model. Even though we use the term "model", that's not at all what a neural-net/LLM type "AI" is doing. It's just adjusting weights to maximize correlation between outputs and scores. Any internal model is vague at best, and planning (the also-incomplete core of "classical" AI before the winter) is totally absent. That's a huge difference.

ChadGPT is really not much more than ELIZA (1966) on fancy hardware, and it's worth noting that ELIZA was specifically written to illustrate the superficiality of (some) conversation. Its best-known DOCTOR script was intentionally a parody of Rogerian therapy. Plus ça change, plus c'est la même chose.


Why do we think that inside the 'weights' there is not a model? Where in the brain can you point and say 'there is the model'? The wiggly mass of neurons creates models and symbols, so why do we assume that inside large neural nets the same thing isn't happening? When I see pictures of both (brain scan versus weights), they look pretty similar. Sorry, I don't have the latest citation, but I was under the assumption that the biggest breakthroughs in AI were around symbolic logic.


As I said, the model is vague at best. Regardless of how the information is stored, a child knows that a ball is a thing with tangible behaviors, not just a word that often appears with certain other words. A child knows what truth is, and LLMs rather notoriously do not. An older adult knows that a citation must not only satisfy a form but also relate to something that exists in the real world. An LLM is helpless with material not part of its training set. Try getting one to review a draft of a not-yet-published paper or book, and you'll get obvious garbage back. Any human with an equivalent dollar value in training can do better. A human can enunciate their model, and make predictions, and adjust the model in recognizable ways without a full-brain reset. An LLM can do none of these things. The differences are legion.

LLMs are not just generalists, but dilettantes to a degree we'd find extremely tiresome in a human. So of course half the HN commentariat loves them. It's a story more to do with Pygmalion or Narcissus than Prometheus ... and BTW good luck getting Chad or Brad to understand that metaphor.


First off, I'm not sure why this is the most upvoted comment. The OP explicitly praises AI, he just smells the same grifters gathering around like they did to crypto and he's absolutely right, it is the exact same folks. He isn't claiming the mind is metaphysical or whatever.

On your claim that the mind is either metaphysical OR it is a NN: you have to understand that this extremely false dichotomy is quite the stretch itself, as if there are no other possibilities, as if it isn't a range or couldn't be something else entirely. One of the critiques the "old guard" has of NNs is the lack of symbolic intelligence. Claiming you don't need it and that fitting is merely enough is suspect, because even with OpenAI-tier training, only the grammar is there; some of the semantic understanding is lacking. Appealing to the god of the gaps is a fallacy for a reason, although it may in fact turn out to be true that just more training is all that is needed. EDIT: Anyway, the point is that assuming symbolic reasoning is a part of intelligence (hell, it's how we discuss things) doesn't require mysticism; it's just an aspect that NNs currently don't have, or, very charitably, do not appear to have quite yet.

Regardless, there isn't really evidence that "what brains do is what NNs do" or vice versa. The argument, as many times as it has been pushed, has been driven primarily by analogy. But just because a painting looks like an apple doesn't mean you can eat the canvas. Similarities might betray some underlying relationship (the artist who made the painting took reference from an actual apple you can eat), but assuming an equivalence without evidence is just strange behavior, and I'm not sure to what purpose.


The main post was about burnout and hype. And I was just trying to point out that things really are advancing fast and we are producing amazing things, despite the hype.

Like maybe the hype is not misplaced. There are grifters, and there are companies with products that are basically an "IF" statement, and the hype is pretty nutz.

On the other hand, some of this stuff is amazing. Don't let the hype and smarmy salespeople take away from the amazing advancements that are happening. Just a few years ago some of this would have been considered impossible, only the province of the 'mystery of the human mind'. And yet, here we are, and what it is to be human is being chipped away more every month, and yeah, a lot of people want to profit.

Or, more to my main thought, a lot of heads-down engineers cranking out solutions do lose sight of how far they are moving. So don't get discouraged by the hype; marketing is in every industry, so why not stay in this cool one that is doing all the amazing things?


> The human is computation also, unless you believe in souls or other worldly mysticism.

I think it is incredibly sad that a person can be reduced to believing humans don't have souls. Do something different with yourself so you can discover the miracle of life. If you don't believe there is anything more to people and to the world than mechanical processes, I would challenge you to do a powerful spiritual activity.


By spiritual practice, do you mean something like studying the Skandhas, the five aggregates? Or do you mean to open myself to the love of our lord and savior? It does make a difference in how you approach the world if your spiritual practice encourages insight, or if you are blinded by faith in a spiritual entity that is directing you.


A powerful spiritual practice like challenging your own limits and fears to the maximum, or meditation and fasting, or immersing yourself in a completely different environment from what you are familiar with until you know it truly. Or if these things sound too abstract, to take a strong dose of psychedelics alone or with others.

Religious texts are something that can be interesting after sensing some spirituality, but probably not before. I don't think anybody who is not spiritual can become so by reading religious texts.


It's just that you said I was sad because I don't think there is a soul. Because, I assume, you think that there is some mystical entity directing the body that is you, and a spiritual practice would lead to that conclusion. The soul would be some essence that is beyond the physical world (a being from an alternate universe, maybe?). But then you speak of meditation and fasting. I would say that meditating has been what led me here. Meditating, examining, observing one's own mind helps one see that there is no self; thoughts arise on their own. There is no core, undetectable, mystical soul.

We give our minds too much credit; we keep arguing about whether AI is, or can ever be, conscious without ever defining what consciousness is. I would say that humans aren't conscious in the way we think we are. There is no free will, and we don't decide what we think about. If you think about thinking, where does the first thought come from?


Well, I'm looking at AI as an artist. The first notable thing I see is that I can pick out where all the references come from, and I find that boring. They are often not adapted to each other; there are obvious scale discrepancies and poor to nonexistent perspective. Cohesiveness is missing. Also, judging by the arguments I have seen, there is a very poor understanding of how artists work and develop their art. Artists transform materials, not existing art. Some artists may copy others, but developing your own style is considered the gateway to making your best art. Art has many forms, with a large portion of them being three-dimensional. AI is a greatly limited tool that can statistically gather and render already existing imagery into a two-dimensional format. It can be used as a tool to make art if the user has the skill to direct it with the proper prompts, but to equate that with how humans learn to create is to truly misunderstand the human process.


I wish I could help you more, but I don't know if I really can. But look at it this way: What are the odds that you have figured out everything about existence and there is nothing more to it than cold matter? Wouldn't you at least try to prove yourself wrong?


I wish I could help you too. Isn't that the crux of the problem: you think I'm so deluded I can't be helped, and I think the same of you. So once again, the two sides of any faith-based/mystical interpretation of reality can't prove anything. I can't prove there is no soul, and you can't prove there is one. Anything based on mystical faith is just someone's opinion.

What are the odds you have it figured out? Why can't you try to prove yourself wrong? The odds are you haven't figured it out either, so why have you stopped trying? You say the soul exists, so why is the onus on me to prove it doesn't, while you don't have to prove anything?


It has nothing to do with you; it is the limit of the medium. You can't prove a soul or anything spiritual through text. Or at least I can't. I don't think you're deluded.

But, I will argue that there is physical evidence for the soul and for the spiritual beyond our everyday comprehension. That physical evidence is psychedelics. If you take psychedelics once with a person who is dear to you, I'm certain you will come out on the other side much assured they have a soul, that there is much more to people than what you see in everyday life.


I see where you are coming from now. See, 'text' got us here eventually.

I'm more from a Zen Buddhism background, so I agree about not trusting 'text', and that language is limited for communication. I think a lot of the issues here are just about misinterpreting language.

But for psychedelics, I have always fallen on the side that they can also cause delusion. I guess because they are mind-altering, they are potentially altering perceptions into something even less real than what someone had without psychedelics.

The other reason I have not depended on them is that no matter the impressions they leave, however mind-expanding, it is still isolated inside my own head. They don’t provide proof of anything outside myself. The results are still limited to the individual’s point of view. But I also agree that they can be valuable if someone is so buried in dogma that it helps them break out and look around. So, I guess for psychedelics, it depends on where someone is at and what they are trying to achieve.

With all addiction dogma, and arguments about what is or isn't addictive, set aside: my struggles to overcome addiction have led me not to trust mind-altering substances. Even if our own unaltered perceptions are an illusion, so is the altered perception. So being in an altered state is not gaining ground on understanding.

On the other hand, psychedelics do help with some addictions, so I guess mileage can vary.


What is a powerful spiritual activity you’d recommend?


I'm pasting my response from above:

A powerful spiritual practice like challenging your own limits and fears to the maximum, or meditation and fasting, or immersing yourself in a completely different environment from what you are familiar with until you know it truly. Or if these things sound too abstract, to take a strong dose of psychedelics alone or with others.


What's sad about it?


I'm working on a project that uses GPT-3 and similar stuff, and was even before the hype. I think the overhype is really tiring.

Just like with most of these hype cycles there is an actual useful interesting technology, but the hype beasts take it way overboard and present it as if it's the holy grail or whatever. It's not.

That's tiring, and really annoying.

It's incredibly cool technology, and it is great at certain use cases, but those use cases are somewhat limited. In the case of GPT-3, it's good at generative writing, summarization, information search and extraction, and similar things.

It also has plenty of issues and limitations. Let's just be realistic about it, apply it where it works, and let everything else be. Now it's becoming a joke.

Also, a lot of products I've seen in the space are really really bad and I'm kinda worried AI will get a scam/shitty product connotation.


Finally, a take on chatgpt and similar LLMs I agree with!

I've criticized it whenever it gets brought up as an alternative for academic research, coding, math, other more sophisticated knowledge based stuff. In my experience at least, it falls apart at reliably dealing with these and I haven't gone back.

But man, is it ever revolutionary at actually dealing with language and text.

As an example, I have a bunch of boring drama going on right now with my family, endless fucking emails while I'm trying to work.

I just paste them into ChatGPT and get it to summarize them, and then I get it to write a response. The onerous safeguards make it so I don't have to worry about it being a dick.

Family has texted me about how kind and diplomatic I'm being and I honestly don't even really know what they're squabbling about, it's so nice!


Haha that's amazing. This is exactly what I mean, for the right use cases it's absolutely amazing.

Good luck with the drama! Make sure to read a summary for the next family meeting haha.


The great part is I can just get it to summarize all the summaries, love how it's flexible like that.

Yeah I will be sure to read it before meeting them, would be awkward if they found out I was using it during one of the disputes, which was whether or not to keep resuscitating Grandma.

ChatGPT's stupid ethical filters made it so I actually had to type my response to that one all by myself.


> I'm kinda worried AI will get a scam/shitty product connotation.

Which has happened before. The original semantic/heuristic AI, most notably expert systems, over-promised and ultimately under-delivered. This led directly to the so-called "AI winter" which lasted more than two decades and didn't end until quite recently. It's a very real concern, especially among people who want to push the technology forward and not just profit from it.


> I'm kinda worried AI will get a scam/shitty product connotation

I think we're already there. A legion of AI-based startups that offer little more than gimmicks seems to be coming out daily (https://www.futuretools.io/).


You are probably right, kinda sad.

My last resort is to just remove all AI references from my marketing and just deliver the product.


> Just like with most of these hype cycles there is an actual useful interesting technology, but the hype beasts take it way overboard and present it as if it's the holy grail or whatever. It's not.

See also: Gartner hype cycle


Thanks! Very interesting. I guess our challenge will be surviving the inevitable dip.


> Also, a lot of products I've seen in the space are really really bad and I'm kinda worried AI will get a scam/shitty product connotation.

I agree with this, I feel like I've seen a lot of really cool technology get swept up in a hype storm and get carried away into oblivion.

I wonder what ways there are for the people who put out these innovations to shield them/their products from it?

Luckily I have a lot of faith in the OpenAI people - I hope they're shielding themselves from the technological form of audience capture.


I think the hope is that, unlike crypto, as others have said, AI will clearly have actual good applications, so the hype beasts, after they've burned through all the grifting they can, will go back off to selling penis pills or whatever they were into last year (or maybe pre-2020).


> GPT-3 it's good at generative writing

made up bullshit

> summarization

except you can't possibly know the output has any relation whatsoever to the text being summarized

> information search and extraction

except you can't possibly know the output has any relation whatsoever to the information being extracted

people still fall for this crap?


Agreed. I've been testing out its responses to parsing complex genomics papers (say, a methodology section describing parameters of some algorithm), and it's mostly rephrasing rather than digesting and responding with useful information/interpretation. And it will use so many words and add so little to the conversation, yet appear like it's helping because ... words.


Yeah you can't use it for academic research, it's really, really terrible at it.

It will even claim it can generate citations for you too, which is pretty messed up because when I tried, it just fabricated them, replete with faked DOIs.

Where it shines is at squishy language stuff, like generating the framework of an email, paraphrasing a paragraph for you, or summarizing a news article.

It really is revolutionary at language tasks, but unfortunately the hype machine and these weird "AI sycophants" have caused people to dramatically overestimate its use cases.


> complex genomics papers

I think it's fair to say this is not one of the use cases where it shines. It's not great at logic, it's also not that smart.

That's exactly what the hype does. Too-big claims, and then it gets dismissed when it inevitably doesn't live up to them.


I think this is sort of the other side of the hype, totally dismissing it is also incorrect imo.

Yes, it's overhyped, but it's not useless, it actually does work quite well if you apply it to the right use cases in a correct way.

In terms of accuracy, ChatGPT's hallucination issue is quite bad; for GPT-3 it's a lot less pronounced, and you can reduce it even further with good prompt writing, fine-tuning, and settings.
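
For anyone wondering what "settings" means here in practice, a minimal sketch using the openai Python package as it existed at the time; the API key, model choice, and prompt are placeholders I made up, not anything from this thread. Pinning temperature to 0 and instructing the model to answer only from supplied context are the usual first knobs.

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder

    context = "...only the text you actually trust, pasted in here..."
    question = "What does the text say about the return policy?"

    response = openai.Completion.create(
        model="text-davinci-003",
        prompt=(
            "Answer using only the context below. If the answer is not "
            "in the context, reply 'I don't know.'\n\n"
            f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
        ),
        temperature=0,   # deterministic output, less inclined to invent details
        max_tokens=200,
    )
    print(response["choices"][0]["text"].strip())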

Can we just recognize it for what it is?


Someone called it a zero-day on human cognition, on the entire society, so I am ready to recognize it as that.


I've had that since I was doing my master's in data science (5 years ago?). I love the models, the statistics and just the cleverness of everything but I just can't stand the "scene" anymore and have moved almost entirely away from it. It's not as exciting as it was anymore.

When I started with the topic I watched a documentary with Joseph Weizenbaum ([1]) and felt weirded out that someone would step away from such an interesting and future-shaping topic. But the older I get, the more I feel that technology is not the solution to everything and AI might actually make more problems than it solves. I still think Bostrom's paperclip maximizer ([2]) is lacking fundamental understandings of the status quo and just generated unnecessary commotion.

[1] http://www.plugandpray-film.de/en/ [2] https://www.lesswrong.com/tag/paperclip-maximizer


Yes, PoW crypto is now a much more concrete example of the potential damage from poorly aligned utility functions, as well as the challenges in containing a system once it is released.

I'm finding the current hype cycle very frustrating from both sides. On one side there is frequent overplaying of current capabilities, and cherry-picked examples given as if they're representative. On the other side there is an over-simplistic "AI is evil" reaction. It's hard to deny that progress in the past few years greatly exceeds expectations and could make a significant improvement to individual creativity and learning, as well as to how we cooperate, but so much of the discussion is fear-based.


Same here, didn’t do a masters, but worked as a data scientist for a good while.

> I love the models, the statistics and just the cleverness of everything but I just can't stand the "scene" anymore

This really sums up my feelings too.


You mentioned you moved almost entirely away from the AI / data science "scene". Where did you move to?


Being a CTO (doing manager stuff), regular coding. By moving away I also meant I don't follow along anymore and don't contribute to the projects I used to. I just lost interest.


It's the next hype train since the blockchain/crypto/NFT hype train. The crypto train has arrived at its overheated, decentralized set of train stations that all have remnants of fraudsters, high-end GPU boxes scattered about, torn-up flyers with the promise of untold riches, people scampering about in the shadows muttering "defi" to themselves, people getting carted off in handcuffs.

Where will the AI hype train go? The internet as we know it already has so much SEO engineered content and content producers chasing that sweet, sweet advertising money that they could all be replaced by mediocre, half-true, outdated content created by bots. So do we have to wait until our refrigerators are "AI powered, predicts your groceries for you!" in order to see the usefulness?


>It's the next hype train since the blockchain/crypto/NFT hype train.

It really isn't. The business use cases even with current tech are pretty obvious. The problem with crypto/blockchain stuff was that it was useless. An emperor with no clothes.

Is there a more legitimate argument for why they're similar other than "hype" or am I missing something?


> Is there a more legitimate argument for why they're similar other than "hype" or am I missing something?

The tech industry runs on hype, so much so that analysts are told to evaluate them separately. Growth now, profit later, here's $2bn from Softbank, yada yada yada.

Companies like Theranos specifically positioned themselves as 'tech' so as to escape press scrutiny, particularly in sensitive industries like healthcare.

Emperors with no clothes can get very far; see Brian Armstrong and SBF (pre-collapse, but still not in jail). Can you imagine how far a well-funded AI hustler could get?


> The business cases are pretty obvious

??? What are they?

- bad code, with non-obvious bugs? I would prefer the original slashdot/GitHub/blog post. Google used to do that.

- chat bots? The customer service will still be shit. Your problem will still not be solved. But I guess some call center staff can be fired. Customers will be very happy to never be able to speak to a human.

- Writing mediocre, overlong content for Google to place ads in? Just what the internet needs. It’s already daytime TV.

Any more?


Think every day office jobs and try to figure out where you can use it as a productivity multiplier. You can figure out Microsoft's next step from there.


Yes, it is a productivity multiplier. By a negative number. Microsoft's next step is to sell access to it, preferably inside overpriced Microsoft Azure. And to continue to brainwash people in the media by making them think that it will solve a single problem better than people can.


I'd be interested to know what productivity areas Microsoft identified before launching Teams.


“Quick, figure out how to make the corpse of Skype even worse!”


It's the hype cycle. Other hype cycles were things like the dot-com boom (and bust). I don't mean it as a comparison of technology merits, just that we sometimes have to live through a hype cycle to get to a real understanding of where the technology might actually be useful. My snarky comment implies that we will have to wait until someone is advertising their AI-powered refrigerator (i.e. get out of the hype cycle) to understand what real use cases are out there.


It is not about whether or not there are viable use cases. It is the hype added on top. Hype cycles are as old as IT. XML, Semantic Web, SOAP, Service-Oriented Architecture, Enterprise Service Buses, Big Data, Serverless... they all got their hype phase, where you are bombarded with them to death, and then finally, when that dies down, some good applications remain.


I don't really understand your argument. Because other tech has been hyped in the past and let you down you think that will therefore extend to AI because...it just will? What precisely links AI to the semantic web or SOAP?

(and it's always about business use cases)


I didn't mean to say that the tech necessarily will let you down, just that the hype is a common phenomenon, even when there are already viable applications. AI hype cycle is peaking, and I have no doubt that disruptive tech comes forth from it after it subsides.


A symptom of our bubble-powered economy in general.


There is a lot of negativity towards the idea of AI in this thread, and I feel like someone has to say it: it is quite likely that in the near future computers will be better than almost all humans at almost all cognitive tasks.

If you have a task or are trying to accomplish something, and the way you do it is by moving a mouse around or typing on a keyboard, then it is very likely that an AI will be able to do that task. Doing so is a more or less straightforward extension of existing techniques in AI. All that is necessary is to record you performing the task and then an AI will be able to imitate your behavior. GPT-3 can already do this for text, and doing it instead with trajectories of screen, mouse and keyboard is not fundamentally different.
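
To make the "record and imitate" idea concrete, here is a toy behavior-cloning sketch in PyTorch; everything in it (dimensions, action count, the random stand-in data) is hypothetical and heavily simplified, not a description of any real system. The point is just that imitating recorded trajectories reduces to plain supervised learning.

    import torch
    import torch.nn as nn

    # Policy network: a screen frame in, a discretised "what the human did next" action out.
    policy = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=8, stride=4), nn.ReLU(),
        nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
        nn.Flatten(),
        nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
        nn.Linear(256, 32),  # 32 made-up action classes (mouse moves, clicks, key presses)
    )
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    # Stand-ins for recorded demonstrations: 84x84 screenshots plus the action taken next.
    frames = torch.rand(64, 3, 84, 84)
    actions = torch.randint(0, 32, (64,))

    for _ in range(10):  # behavior cloning: ordinary supervised training on the logs
        loss = loss_fn(policy(frames), actions)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()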

So yes, it is true that there is a lot of hype right now, but I suspect it is a small fraction of what we will see in the near future. I also expect there will be an enormous backlash at some point.


I think this sentiment that this will happen in the "near future" is the cause for exactly the sort of fatigue the author is talking about.

If you mean in the next year or two, I hate to disappoint you, but barring some massive leap forward, you are going to be wrong.

If you mean in the next hundred years, or maybe sometime in our lifetimes, sure. The chance that it looks anything like ChatGPT or GPT-3 does now, though, is laughable.

This isn't the future. This is a small glimpse into a potential future down the line, but everyone is talking like developers/designers/creatives/humans are already obsolete.


> If you have a task or are trying to accomplish something, and the way you do it is by moving a mouse around or typing on a keyboard, then it is very likely that an AI will be able to do that task.

You don't need AI to move a mouse around or type on a keyboard. A simple automation is enough.

The value is not in moving a mouse or typing on a keyboard. The value is in knowing when and where to move the mouse and when and what to write on the keyboard.


> GPT-3 can already do this for text

Kind of; it isn't foolproof. I use GPT-3 and ChatGPT (not the same thing) almost daily, and there is quite a bit of error correction that I am doing. Still, it is really helpful.


AI leaves a bad taste in my mouth but I think it is because we have moved away from ML/Vision problems with a strong background in academic research, and high impact and purposeful development of these into products.

We are now exposed to companies hyping huge general purpose models with whatever tech is the latest fad, which resonates with the average person who wants to generate memes, etc.

This is impressive only at the surface level. Take a specific application: prompting it to write you an algorithm. Outside of anything it can copy and paste from a textbook, these models will generate bad/incorrect code and then explain why it works.

It's like having an incompetent junior on your team who has the bravado of a senior 10x-er.

That's not to say "AI" doesn't have a purpose, but currently it seems just hyped up by salespeople looking for Series A funding or an IPO cash-out. I want to see models developed for specific tasks that will have a big impact, rather than the sleight-of-hand or circus tricks we currently get.

Maybe that time has passed, and general models are the future, and we will just have to wait until they're as good as any specific model built for any task you can ask of them.

It will be interesting what happens when these "general" models are used without much thought and their unchecked results lead to harm. Will we still find companies culpable?


I think you hit on some good points. It seems like in common language, AI has taken the meaning “general purpose”, rather than satisfying some criterion of the futurist definition.

Personally, I care very little about whether the machine is intelligent or not. If it actually happens in my lifetime, I believe it will be unmistakable.

I am interested in how people solve problems. If you built and trained a model that solves a challenging task, THAT is something I find noteworthy and what I want to read about.

Apparently utility is boring, and “just ML” now. There are tons of academic papers I see fly under the radar, probably because they solve specific problems that the average person doesn’t know exist. Much of ML doesn’t foray into “popular science” enough to hold general public interest.


I dread the coming "age of bugginess", when imprecise LLMs pervade UIs and make everything always a little broken.

I don't deny that LLMs represent a coming revolution in computer interaction. But as someone who's already mastered the command line, programming, etc., I already know how to use computers. LLMs will actually be slower for me for a huge variety of tasks, like finding information. English is so clumsy compared to programming languages.

I feel like for nerds like me, "user friendliness" is often just a hindrance. For me this has been the case with GUIs in general, touch GUIs especially, and it probably will be for most LLM applications that don't fundamentally do something I cannot (like Stable Diffusion).


> AI is great. ChatGPT is incredible.

Imagine how the HN users who disagree with that feel. It is beyond fatiguing. I’m frequently reminded of the companies who added “blockchain” to their name and saw massive jumps in their stock price, despite having nothing to do with blockchains¹.

¹ https://www.theverge.com/2017/12/21/16805598/companies-block...


I don't mind the companies that got a temporary bump from adding blockchain to the name.

I'm more concerned about the Twitter hype-men and women adding '.eth' to their name and singing DeFi praises all day long... and then quietly removing it without so much as a word, once the hype is dead and keeping the '.eth' makes you look like a sucker.

BTW a lot of influential people were on that train, current CEO of YCom being one of them.


I think this time is the good one. ChatGPT has reached a level where we finally can think of building actually useful products on top of "AI".

Note that nobody is pretending that ChatGPT is "true" intelligence (whatever that means), but I believe the excitement comes from seeing something that could have real application (and so, yes, everybody is going to pretend to have incorporated "AI" in their product for the next 2 years probably). After 50 years of unfulfilled hopes from the AI field, I don't think it's totally unfair to see a bit of (over)hype.


I really don't understand how engineers are having good experiences with it; a lot of the stuff I've seen it output w.r.t. swe is only correct if you're very generous with your interpretation of it (re: dangerous if you use it as anything more than a casual glance at the tech). W.r.t. anything else it outputs, it's either so generic that I could do it better, outright wrong (e.g. cannot handle something as simple as tic tac toe), or functions as an unreliable source (in cases where I simply don't have the background).

I wish I could derive as much utility as everyone else that's praising it. I mean, it's great fun but it doesn't wow me in the slightest when it comes to augmenting anything beyond my pleasure.


I'm a Civil Engineer with a modest background including some work in AI. I'm pretty impressed with it. It's about as good or better than an average new intern and it's nearly instant.

I think a big part of my success with it is that I'm used to providing good specifications for tasks. This is, apparently, non-trivial for people to the point where it drives the existence of many middle-management or high-level engineering roles whose primary job is translating between business people / clients / and the technical staff.

I thought of a basic chess position with a mate in one and described it to ChatGPT, and it correctly found the mate. I don't expect much chess skill from it, but by god it has learned a LOT about chess for an AI that was never explicitly trained on chess itself, with positions as input and moves as output.

I asked it to write a brief summary of the area, climate, geology, and geography of a location I'm doing a project in for an engineering report. These are trivial, but fairly tedious to write, and new interns are very marginal at this task without a template to go off of. I have to lookup at least 2 or 3 different maps, annual rainfall averages over the last 30 years, general effects of the geography on the climate, average & range of elevations, names of all the jurisdictions & other things, population estimates, zoning and land-use stats, etc, etc. And it instantly produced 3 or 4 paragraphs with well-worded and correct descriptions. I had already done this task and it was eerily similar to what I'd already written a few months earlier. The downside is, it can't (or rather won't) give me a confidence value for each figure or phrase it produces. ...So given it's prone to hallucinations, I'd presumably still have to go pull all the same information anyway to double check. But nevertheless, I was pretty impressed. It's also frankly probably better than I am at bringing in all that information and figuring out how to phrase it all. (And certainly MUCH more time efficient)

I think it's evident that the intelligence of these systems is indeed evolving very rapidly. The difference between GPT-2 and GPT-3 is substantial. With the current level of interest and investment, I think we're going to see continued rapid development here for at least the near future.


I can't speak to the rest of what you wrote because I couldn't be further from the field of civil engineering, but if you feel impressed with it on chess, ask it to play a game of tic-tac-toe; for me it didn't seem to understand the very simple rules or even keep track of my position on the grid.

There are so few permutations in tic-tac-toe that its lack of memory and lack of ability to understand extremely simple rules make it difficult for me to have confidence in anything it says. I mean, I barely had confidence left before I ran that "experiment", but that was the final nail in the coffin for me.


This is like complaining that your computer isn't able to toast bread. It's a language model based on multicharacter tokens, outputting grids of single characters is not something you would expect it to succeed at.

If you explained the rules carefully and asked it to respond in paragraphs rather than a grid, it might be able to do it. Can't test since it's down now.


You're acting like it's a grid of arbitrary size and an arbitrary amount of characters. It's a 3x3 with 2 choices for each square.

Neglecting that (only because it's harder to navigate whether I should expect it to handle state for an extremely finite space; even if it's in a different representation than it's directly used to), I know I saw a post where it failed at rock, paper, scissors. Just found it:

https://www.reddit.com/r/OpenAI/comments/zjld09/chat_gpt_isn...


Let's talk about what ChatGPT (or fine-tuned GPT-3) actually is and what it is not. It is a zero-shot or few-shot model that is pretty good at a variety of NLP tasks. Playing tic-tac-toe or chess is not a traditional NLP task, so you shouldn't expect it to be good at that. But board games can be played entirely in a text format, so it is not unexpected either that it can kinda play a board game.

If GPT-3 was listed on Huggingface, its main category listing would be a completion model. Those models tend to be good at generative NLP tasks like creating a Shakespeare sonnet about French fries. But they tend not to be as good at similarity tasks, used by semantic search engines, as models specifically trained for those tasks.


That's a core problem with this. If people with expertise can't even tell us clear boundaries of its truth, how is anyone else going to come to rely on this for that purpose? I mean, you could say you defined a fuzzy boundary and I shouldn't approach that boundary from the wrong direction (re: text games that use different tokens than the ones it was trained on), but how will I know if I'm too close to this boundary when I'm approaching it from the direction of doing things it's supposed to be good at?

It can't play tic tac toe, fine. But I know it gets concepts wrong on things I'm good at. I've seen it generate a lot of sentences that are correct on their own, but when you combine them to form a bigger picture, it paints something fundamentally different than what's going on.

Moreover, I've had terrible results with it as something to generate creative writing; to the extent that it's on par with a lazy secondary school student that only knows a rudimentary outline of what they're writing about. For example, I asked it to generate a debate between Chomsky and Trump and it gives me a basic debate format around a vague outline of their beliefs where they argue respectfully and blandly (both of which Trump is not known for).

It's entirely possible I haven't exercised it enough and that it requires more than the hours I put into it or it just doesn't work for anything I find interesting.


I agree that the state of the art isn't ready yet for general consumption. I think GPT-3, etc. are good enough to help with a wide range of tasks, with guardrails. To be clear, I don't mean guardrails around racist language, etc., which is a separate topic. Rather, guardrails around when to use the results, because of limitations and accuracy.

For example, let's say you have a website that sells clothes and you want to make the site's search engine better. Let's also say that a lot of work has been done to make the top 100 queries return relevant results. But the effort required to get the same relevance for the long tail of unique queries, think misspellings and unusual keywords, doesn't make sense. However, you still want to provide a good search experience, so you can turn to ML for that. Even if the model only has 60% accuracy, that's still a lot better than 0% accuracy. So applying ML to queries outside the top 100 should improve the overall search experience.
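
A rough sketch of the shape of that fallback; every name and value here is made up purely for illustration. Hand-tuned results serve the head queries, and an ML model only handles the long tail.

    # Hand-curated results for the head queries (the "top 100").
    CURATED_RESULTS = {
        "blue jeans": ["sku-101", "sku-205"],
        "black dress": ["sku-330", "sku-014"],
    }

    def semantic_search(query, top_k=20):
        """Stand-in for the ML path, e.g. rank products by cosine similarity
        between the query embedding and product-description embeddings."""
        return []  # imperfect (maybe ~60% accurate), but better than nothing

    def search(query):
        q = query.strip().lower()
        if q in CURATED_RESULTS:       # head query: known-good, hand-tuned results
            return CURATED_RESULTS[q]
        return semantic_search(q)      # long tail: misspellings, unusual keywords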

ChatGPT/GPT-3 has increased the number of areas where ML can be used, but it still has plenty of limitations.


The fact that I can use this tool as a source of inspiration, or for a first opinion on any kind of problem on earth, is totally incredible. Now, whenever I'm stuck on a problem, ChatGPT has become an option.

And this happens in the artistic world as well with the other branch of NNs: "mood boards" can now be generated from prompts infinitely.

I don't understand how some engineers still fail to see that a threshold was passed.


I've literally asked it to generate stories from prompts, and it has, without fail, generated the most generic stories I have ever read. High-school me could have done better with little to no effort (and I don't say that lightly), and I'm not a good writer by any means.

Moreover, its first opinion on the things I'm good at has been a special kind of awful. It generates sentences that are true on their face but, as a complete idea, are outright wrong. I mean, you're effectively gaslighting yourself by learning these half-truths. And as someone with unfortunate lengthy experience in being gaslit as a kid, I can tell you that depending on how much you learn from it, you could end up needing to spend 3x as much time learning what you originally sought to learn (if you're lucky and the only three things you need to do are learn it very poorly, unlearn it, and relearn it the right way).


My experience was more 50% bullshit, 50% fact. I did, however, explicitly forbid members of my team at work from using its code answers for subjects they weren't already experts in.

However I'm not advocating using its answers directly, but more as a source of inspiration.

Now everybody is aware of the problem of ChatGPT not "knowing" the difference between fact and opinion. It does, however, seem a less hard feature to add than what they've already built (and MS already claims its own version is able to correctly provide sources). The future will tell if I'm wrong...


I agree. Even understanding its limitations as essentially a really good bullshit generator, I have yet to find a good use for it in my life. I've tried using it for brainstorming on creative activities and it consistently disappoints, it frequently spouts utter nonsense if asked to explain something, code it produces is questionable at best, and it is even a very boring conversation partner.


I think it’s less like the crypto craze than the PC, web, or smartphone “crazes”, where businesses starting incorporating each of the above into everything.

In other words, if you’re fatigued already, I have some bad news regarding the rest of your life.


If you're tired of AI now you're gonna hate where we are going. Strap in!

(…or take a good step back from the news cycle, check in once or twice a week instead of several times daily. News consumption reduction is good for mental health.)


This is something any crypto-bro would have told you in 2017.


Really don't understand the constant crypto comparisons. We have one technology that hasn't provided any benefits whatsoever in 10 years and one that has provided real utility from day one. One deserves the hype, the other doesn't.


Bitcoin has provided hundreds of billions in value; ChatGPT has provided me with one hundred times the spam.

I'm actually optimistic about both crypto and AI, but I see the author's point. I really don't think the comparison is hard to spot between the AI hype and, say, the NFT hype from a year ago.

A lot of people are claiming that these technologies will imminently change everything, fundamentally. In reality, both of them are just neat things that give us a glimpse of what the future may hold, and hold a bunch of promise, but aren't really changing anything fundamentally. Not yet, at least.


Haha! Yeah good idea.


It's part of overall "tech hype fatigue"; think of all the waves upon waves: big data, social apps, crypto/web3, self-driving cars, virtual reality, AI, etc.

At the same time, people's actual quality of life or economic standing is going nowhere, there is fragility that bursts into the open with every stress, politics has become toxic, and the environment gets degraded irreversibly.

Yet people simply refuse to see and they keep chasing unicorns.


Everyone is a temporarily inconvenienced multi-millionaire. They only need to get in on the next big thing and ride their way to a comfortable life at the top.


It's called being transfinancial. You feel like a rich person but are born in the body of a poor one.


Yes, many people are feeling AI fatigue. AI can be overwhelming and many people feel like they are being bombarded with information that they don't understand. People are also concerned about how AI is being used and its potential implications for privacy and security.

Sorry, I couldn't help myself; that is the ChatGPT response to your question. More informatively, AI is clearly at the peak of inflated expectations. It will provide a helpful tool. However, it will not push people out of jobs. Furthermore, right now it gives a much better search experience than Google, as it is not yet filled with ads or gamed extensively by SEO. It is doubtful this will stay like this in the future.


I could tell by the start of the second sentence.


ChatGPT's overuse of the word "overwhelming" and a couple of other similar words is very characteristic. I think it comes from the "political correctness"/"provide kind answers" prompts it is bombarded with during training.


That first paragraph. It is a big thing that a machine can generate something like that, but in reality it feels like it just brings noise. Not sure why anyone expects this to improve SEO noise.



I think machine learning already went through the trough of disillusionment around 2016-2018 for computer vision and around 2018-2020 for voice assistants.

I think we're now past that, and people can see that tools like ChatGPT are powerful enough to be applied in many pre-existing contexts and industries in unpredictable and inventive ways without huge amounts of manual configuration, which makes it more exciting.


ML/AI is a repeat offender (for that matter, so is The Almighty Blockchain; it managed a few hype cycles under slightly different identities: blockchain, ICOs, NFTs, and so on). Remember in the late 90s when Microsoft and Apple both appeared fully convinced that voice would imminently be the primary interface with computers? There was also a brief but sizable chat agent bubble a few years back.


Machine learning is way too generic of a term. Everything from linear regressions to neural models is technically "machine learning".

Language models are right now at the very top of the peak of inflated expectations. It's still too early to tell what the real impact will be, but it won't be even remotely close to what you read on the headlines.

Far more impressive technology (like Wolfram Alpha) has existed for well over a decade now, and it's directly comparable to language models for many applications.

My guess is they will end up being something like Rust. Very cool to look at, little impact on your day-to-day.


If you can jump around without predicting which point comes next, the hype cycle is useless as a model. There are just terms people use for things that are in vogue. There is no hype cycle.


ChatGPT is, of course, a great piece of software, but the huge hype is probably what it will be best remembered for. Also, since currently, AI is the exclusive playground of big corporations, to me it's a bit puzzling how some people can get so excited (and maintain that excitement) over something that they cannot control and have little hope of building it by themselves. I guess some are just more in love with technology, than with other things in life. Because, as everyone is probably well aware by now, more technology is the solution to every problem that ever faced mankind and will finally fix everything. :)


I assume that there will be an open and freely available model as large as ChatGPT within a year or so. Training costs are prohibitive but what about NSF grants?


I don't know about the NSF, or when governments will get in on AI, but you're probably right, the technology will become open source in a while, as has happened in the past.

The cost of developing and training an AI system, however, looks unlikely to come down for the time being, keeping it out of reach for most individuals.

When the PC revolution was happening, everyone interested had a good chance of getting in, they just needed some money to buy/rent a computer and learn to use it or program it.

Compared to that, the AI revolution doesn't seem to have the same quality.

The barrier to entry seems much much higher this time.


I do think that governments will have an interest in keeping around models that they're in control of; just like there is publicly funded broadcasting, you may want to be able to control all the biases of a widely used model and not just import it from somewhere.


The HN crowd would do well to take a step back and look at this from a little different perspective. We are able to see "AI" and machine learning as the very young and imperfect technologies that they are. That said, ChatGPT is the first time a technology like this has been even REMOTELY available to the vast majority of the public, and democratizing this capability even in the small box of a chat window is wildly disruptive. Even after explaining that it probably isn't a great idea to use it for generating 100% factually reliable content, everyone I've shown it to has come up with ways they would use it to make small-but-meaningful improvements in some area of their lives. Consider an immigrant owner of a landscaping company who isn't super confident in their English but needed to respond to a customer to clarify exactly what they need to do on a job site. Did they close the deal solely because of ChatGPT? Hard to say for sure, but it saved an hour of productive time and likely made it easier for the customer to be a reference/word-of-mouth fan.


ChatGPT is great, but it's being hyped up so much right now. We've got AI bros coming in the scene trying to sell everybody a new product. Before the crypto craze, it was big data. I probably missed something in between.


ChatGPT has certainly made a splash, but it's part of a larger trend. I started following developments in modern AI when Kevin Kelly tweeted[1] this in 2016:

> The business plans of the next 10,000 startups are easy to forecast: Take X and add AI.

I think the AI hype cycle isn't done building. A few days ago, Paul Graham tweeted[2] this:

> One of the differences between the AI boom and previous tech booms is that AI is technically more difficult. That combined with VC funds' shift toward earlier stage investing with less analysis will mean that, for a while, money will be thrown at any AI startup.

[1]: https://twitter.com/kevin2kelly/status/718166465216512001

[2]: https://twitter.com/paulg/status/1623060319403905026


It was actually briefly ML again; there was a chat agent VC funding bubble before the main crypto VC funding bubble.


I thought you were going to use "AI Fatigue" in the same sense as "JS Fatigue", and I was going to agree a lot.

I've got "AI Fatigue" not in the sense that it is overhyped, but just like "JS Fatigue": It is all very exciting, and new genuinely useful and impressive things are coming up all the time, but it's too much to deal with. I feel like it's difficult to start a product based on AI these days due to the feeling that it will become obsolete next week when something 10x better will come out.

Just like with JS Fatigue back in the day, the reasonable solution for me is something like "Let the dust settle a bit before going all-in on the latest cool thing"


I’m not. I’ve been using OpenAI’s API a lot for work and it’s made tasks easy that would previously have been very challenging. Some examples:

- Embedding free text data on safety observations, clustering them together, using text completion to automatically label the clusters, and identifying trends (a rough sketch of this workflow follows the list)

- Embedding free text data on equipment failures. Some of our equipment failures have been classified manually by humans into various categories. I use the embeddings to train a model to predict those categories for uncategorized failures.

- Analyzing employee development goals and locating common themes. Then using this to identify where there are gaps we can fill in training offerings.
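
A minimal sketch of that first workflow, just to make it concrete (this is my own illustration, not the exact pipeline described above): it assumes the pre-1.0 openai Python client, scikit-learn, an OPENAI_API_KEY in the environment, and made-up example observations.

    # Embed -> cluster -> auto-label, roughly as described in the first bullet.
    import openai
    from sklearn.cluster import KMeans

    observations = [
        "Worker not wearing gloves while handling solvent",
        "Ladder placed on uneven ground near the loading dock",
        "Spill in aisle 4 not cordoned off",
        # ...more free-text safety observations
    ]

    # Embed each observation
    resp = openai.Embedding.create(model="text-embedding-ada-002", input=observations)
    vectors = [item["embedding"] for item in resp["data"]]

    # Cluster the embeddings; the number of clusters is a judgment call
    kmeans = KMeans(n_clusters=3, random_state=0).fit(vectors)

    # Ask a completion model to propose a short label for each cluster
    for cluster_id in range(kmeans.n_clusters):
        members = [o for o, c in zip(observations, kmeans.labels_) if c == cluster_id]
        prompt = ("Suggest a short category label for these safety observations:\n"
                  + "\n".join(members) + "\nLabel:")
        out = openai.Completion.create(model="text-davinci-003", prompt=prompt, max_tokens=10)
        print(cluster_id, out["choices"][0]["text"].strip())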


I'm not at all tired of AI, but I am tired of all the sales/marketing/business people taking our AI, misunderstanding it, pretending it does all sorts of things that it absolutely does not do, and then also not being willing to be educated about how things _really_ work under the hood.

It's the same kind of people that were hyping cryptocurrencies in the past. People who understand nothing about the technology, but shout the loudest about how amazing it is (probably to make money off of it). Those are also the kind of people that will be the cause of the next AI winter.


With all respect for good salesmen who can understand a customer and recommend the right solution: those other types, who just verbally hype something until they get their lead, may be replaceable by said AI some day...


We're in the middle of an AI hype. Much like with previous hypes (crypto etc), time will tell whether it was worth it. Unless you're chasing gold or selling shovels the only thing to do is just to wait it out.


I'm tired of people saying they're tired. I use ChatGPT every day and it provides results of a quality that Google doesn't come close to, even though I'm aware that it sometimes fibs to me, makes up names for functions that don't exist, or makes mistakes. There's hype, but I think it's much more deserved than the hype around cryptocurrencies.


There are plenty of people who live their entire life with bitcoin - they get paid in bitcoin, pay their bills in bitcoin, and send each-other money in bitcoin.

I think it's safe to say your experience is an outlier, just like theirs are.

I'm happy it's working for you, but if you really do use it every day, you surely can understand the points where it doesn't live up to the hype -- or at the very least, how it is not for everyone.


I would say, at least it’s not as predatory and unethical as crypto, where the people involved are knowingly harming others.

But it seems like the current trendline for “AI” is going to be worse. Why be excited about building tools that will undermine democracy and cast doubt on the authenticity of every single photo, video, and audio clip? Now it can be done cheaply, by anyone. It will become good enough that we cannot believe any form of media. And it will also make it impossible to determine if the written word is coming from an actual person. This is going to be weaponized against us.

And at the very least, if you think blogspam sucks now, wait until this becomes 99.9999% of all indexed content. It’s going to jam all of our comms with noise.

But hey it looks great on your resume, right?

Maybe I’m too cynical, would love for someone to change my mind. But you are not alone in your unease.


To be honest, I see some positivity in that.

> Now it can be done cheaply, by anyone. It will become good enough that we cannot believe any form of media. And also make it impossible to determine if the written word is coming from an actual person. This is going to be weaponized against us.

We shouldn't believe any form of media straight away. We only do so because we think faking it is hard and wonder why anyone would bother. Being able to produce it cheaply could make people more attentive and skeptical of things around them. Blogspam sucks mostly because of consumers' belief that it was written by a person who deeply cares about them. The average internet consumer consumes shitty internet not because he is ignorant, but because he or she doesn't know enough to care.

But maybe I'm too optimistic; I just think people are not aware of the stuff around them.


Let's imagine a scenario:

There is a state of emergency presidential address. In Video A, the politician says X Y Z. In Video B, the politician says A B C. Both videos have equal credibility. The videos show no artifacts from tampering. The alteration is undetectable by experts. The broadcast has dire consequences in a divided country.

50% of channels are pushing Video A, 50% of channels are pushing Video B.

We are now in a position where the public actually cannot determine which video is authentic. The politician could broadcast a new statement, to clarify the validity of the first video. But, you could just as easily fake that too, to publish a statement that declares the opposite.

So, then you load up Hacker News or wherever, to determine for yourself what the hell is going on. But someone spins up 1,000 bots to flood the comments in favor of Video A, and someone else spins up 1,000 bots to flood the comments in favor of Video B. These comments are all in natural language, all with their own individual idiosyncrasies. It's impossible to determine if the comment is a bot. And because the cost is essentially free, these bots can be primed for years, making mundane comments on trivial topics to build social credibility. Actual humans only account for maybe 1% of this discourse.

Now imagine: our entire world operates like this, on a daily basis, ranging from the mundane to the dramatic. How about broadcasting a deepfake press statement from a CEO to make a shorted meme stock crash. If there are financial/political incentives to do so, and the technological hurdle is insignificant, these tools will be weaponized.

So how do we "not believe the media", do we all have to be standing in the same room together where something notable happens?

I understand that there could be upsides, the world isn't all doom and gloom. But, I think engineers get myopic, and do not heed the warnings.


If some person A decides to pay person B in crypto money instead of dollars or pesos, because it's more convenient/cheaper/faster, how is that harming you? That's their business. Nobody is forcing you to use crypto.


The world has some cool new toys and I'm glad people are playing with them. I hope it only becomes increasingly accessible with time. Yes, there's going to be a ton of snake oil and disappointments if you listen to the people looking to get rich quick, but I'm excited for what might come from it in the end.

In the meantime, all the attention and media is easing people into thinking about some difficult questions that we may end up having to deal with sooner than we'd like.

The hype can be annoying, and I'm sure there'll be suckers who lose a lot of money chasing it, but I'm also sure AI will get better, and be better understood too, as a result of all of the attention and attempts to shoehorn it into new roles and environments.


Not really AI itself but I am already sick of people asking ChatGPT then posting it in the comments of HN/Reddit.

It just feels like a waste of time having read the comment. Even if the information is there, I don't trust the user to be able to distinguish between what's true and what's confidently false. If it's not my skillset or knowledge base, I assume it's wrong because I can't tell and can't ask follow-up questions.

Me using it as an assistant? Love it. Others using it as an assistant? I don't trust them to be doing it right.

In any case I want to read your opinion, copy paster, not a robot I could just ask in my own time! Just don't post if you've got no thoughts lol


In the early 2000s people were also annoyed about all the internet hype. "DotCom this and DotCom that," and it was stupid that a company could announce they were adding a DotCom to their name and the stock would go up a bunch. So yes, it is annoying that all these crypto scammers and entrepreneurs have put a wrapper around a GPT API call and hype it up.

BUT the rate of change in AI is enormous and it will be a much bigger deal than the internet over the next 10 years. Not because of API wrappers, but because the cost of many types of labor will effectively go to zero.


>In the early 2000s people were also annoyed about all the internet hype.

Well, they were right...


They were right? Even if we only consider the ability of the internet to enable remote work, the internet's impact on human life is astronomical.


Astronomically annoying more like it...

People need to get off being 24/7 wired and chill more. The last thing people, society, and the environment needs is the kind of changes the internet brought...


I never even fully recovered from the "Facebook, but for X" fad.

At least all the previous crazes didn't threaten to replace humans, so I suppose this tech hype bubble is arguably even more irritating.


The hype will die down fairly quickly. But this technology is obviously a huge deal. We've found a practical algorithm to turn more powerful hardware into better results. And hardware was still ramping up at an astonishing rate last time I checked.

It seems more likely that we'll surpass the hype than not in the next few decades. I think people have forgotten how quickly technology can move after the last 20 years of relative stability where more powerful hardware didn't really change what a computer can do.


It's been happening for a decade and I've learned to ignore it.

Cloud for this cloud for that! Blockchain for this blockchain for that! Big Data for this, big data for that! Web scale all the things!

The marketing-driven development is exhausting and has done nothing to improve technology. This happened because of 0% interest rates and free money. People have been vying for all the VC money by creating solutions looking for problems, which end up being useless solutions for which no problems exist.


100% this. This is marketing to the highest level. Heck, look at MS now. They knew this was the opportunity to associate Bing with the hype, and with Google making a mistake with its "infamous" video, they won round 1.

Let's wait until the end of the year and see how well this wave holds up.


It's just the regular wantrepreneur wave when a shiny new thing is released. We had the same with crypto; give it a few months and they'll crawl back to where they came from.


I have a bit of AI fatigue around this wave of tools, but also understand why they are garnering so much attention. Many of the innovation hype categories of the past decade have appeared stuck in the 'early days but just wait...' phase. Self-driving cars, drone delivery, crypto as a currency, crypto as a(n) _____, plant-based meats, virtual reality, etc. While there has been great progress in each of these areas, not one has yet matched market demand with current capabilities in a way that enables it to become a 'game changer.'

To the general public, ChatGPT and the Image Generators 'just appeared,' and appeared in a very impressive and usable form. Of course there were many waves of ML advances leading up to these models, but for many people these tools are their first opportunity to play with ML models in a meaningful way that is easy to incorporate into daily life and with very little barrier to entry.

While impressive and there are many applications, my questions surrounding the new AI tools relate to the volume of information they are capable of producing and our capacity to consume it. Tools can be used to synthesize the information, tools can act on it, but there is already too much 'noise.' There is a market for entertainment tailored to exact preferences, but it won't provide the shared cultural connection mass media provides. In the workplace, e-mails and documents can be quickly drafted. This is a valuable use case, but it augments and increases productivity. It will lower the bar necessary for certain jobs, and it will increase productivity expectations, but it will become a tool like Excel rather than a replacement like a factory robot (for now).

The Art of Worldly Wisdom #231 - Never show half-finished things to others. <- ChatGPT managed its release perfectly in this regard.


I think with any technology, there will always be individuals looking to make a quick buck. Whether that's a fledgling startup trying to woo investors, big tech cos looking to pump their share price, or your average Twitter/LinkedIn influencer peddling engagement bait.

IMO AI has reached this stage of its lifecycle. There have always been, and still are, valid use cases for AI, but I think the GPT-3 inspired applications we've been seeing as of late are no more than impressive tech demos. It's the first time the general public has seen a glimmer of where AI can go, but it really is just a glimmer at this point.

My advice is to keep your head down and try to be selective with the content you engage with on AI. It seems like every feed refresh I have some unknown Twitter Verified account telling me why swaths of the population will be out of a job soon. The best heuristic I have so far is to ignore AI-related posts/reshares from names I haven't heard of before, but of course that has obvious drawbacks.


What winds me up is the mis-branding, sometimes deliberate sometimes not (which one is worse?!), of basic computer processing as "AI".

It's not AI it's an IF statement for crying out loud :-(

But this is the industry we're in, and buzzword-driven headlines and investment are how it goes.

Actual proper AI getting some attention makes a pleasant change tbh :-)


I disagree; consider the use of the term "video game AI", which historically at least has just been a bunch of _if_ statements chained together. This is totally valid, it's an example of AI without machine learning.

The thing is that AI is just about the most general term for the type of computing that gives the illusion of intelligence. Machine learning is a more specific region of the space of AI, and generally is made of statistical models that lead to algorithms that can train and modify their behavior based on data. But this includes "mundane" algorithms like k-means clustering or line-fitting. Deep learning (aka neural networks) is yet a more specific subfield of ML.
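
To make the "mundane" point concrete: a least-squares line fit already counts as machine learning in this broad sense, since it's a model whose parameters are estimated from data (a toy example of my own, not taken from any particular source):

    import numpy as np

    x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
    y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])     # roughly y = 2x + 1, with noise

    slope, intercept = np.polyfit(x, y, deg=1)  # ordinary least-squares fit
    print(f"learned model: y = {slope:.2f} * x + {intercept:.2f}")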

I think the term AI just has more "sex appeal" because people confuse it with the concept of AGI, which is the holy grail of machine intelligence. But we don't even know if this is achievable, or what technology it will use.

So in terms of conceptual spaces, we can say that AI > ML > DL, and we can say (by definition) that AI > AGI. And it seems very likely that AGI > ML. But it's not known, for instance, whether AGI > DL, ie, we don't know for sure that deep learning/neural networks are sufficient to obtain AGI.

In any case, people should put less weight on the term AI, as it's a pretty low bar. But also yes, the term is way over hyped.


I'm thinking of cases such as colleagues selling something as "ML" that they were then forced to admit amounted to "we use SQL to pick out instances of this specific behaviour we knew was happening". Embarrassing all round.

As folks that work in tech we can tell the difference between stuff that's got some form of depth to it in "proper" AI: ML, DL, AGI as you suggest, vs the over-hyped basic computation stuff. And the selling of the latter as the former can rankle.


I feel like AI-scientists themselves are partially to blame for this. For starters, AI does not 'learn' like a human learns. But still, many of the main terms of the field are based on learning: terms like 'learning rate', 'neural networks', or 'deep learning' imply that there's some kind of being which learns, not just a very complicated decision tree. It's not all the fault of hype marketing people!


> AI-scientists themselves are partially to blame for this

They are not addressing the public or swaying opinion


> It's not AI it's an IF statement for crying out loud :-(

https://arxiv.org/abs/2210.05189 but all NNs _are_ if statements!
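
A toy illustration of that point (mine, not taken from the linked paper): a single ReLU unit is, quite literally, a weighted sum followed by an if statement.

    # One ReLU "neuron": weighted sum of inputs, then a branch.
    def relu_neuron(inputs, weights, bias):
        pre_activation = sum(w * x for w, x in zip(weights, inputs)) + bias
        if pre_activation > 0:   # the ReLU is just this if
            return pre_activation
        return 0.0

    print(relu_neuron([1.0, -2.0], [0.5, 0.25], bias=0.1))  # -> 0.1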


From my side there are three feelings fighting in me: 1. it's incredible how this AI performs at drawing and writing, 2. will it take my programming work? 3. what if this AI makes a mistake?

I think putting AI inside everything will give us the opportunity to experience first-hand what a local extremum of a multidimensional function is and how it differs from a global extremum. Our paper gets eliminated because of some AI-based curriculum vitae review glitch. Our car loses a wheel because computer vision failed (or we lose our heads, like that one Tesla owner)... Most scary for me is that we are starting to build more and more things whose inner workings we won't be able to understand. Hence there might be an intelligence crisis creeping slowly into our civilisation, and then bam... like in Andrzej Zajdel's Van Troff's Cylinder.


You are hardly alone, but you are also likely in the midst of an actual paradigm shift, so the "ecosystem" does what ecosystems do. ie: herds stampede, flocks flock, scavengers scavenge, parasites uhh parasite.

It will be increasingly tiresome until it becomes commonplace, then the disastrous consequences will become the next tedium.


I’m very excited about the methods themselves, but I’m so tired of the manic “vibes” around them, and how they’re starting to affect related fields too. I work at the border between neuroscience and AI, and there are some undeniably cool developments, but there’s also so much hype and overkill.

In a better world, it’d be possible to occasionally pause, take a breath and think about what the models are actually doing, how they’re doing it, and whether that’s what we want them to do. However, it’s hard to find space to do so without getting run over by people “moving fast” and breaking things, and it feels like doing the hard corrective work is so much less rewarded.


I feel like there will be some lucrative opportunities somewhere here to exploit the fallout from this hype. I get the fatigue, but our incomes are basically critically dependent on investor FOMO.

I'd rather we have bitcoin crazes, scaling crazes, nosql crazes and GPT crazes than this industry commoditizes itself to hell and I have to spend the rest of my career gluing AWS cognito to AWS lambdas for $55k / year.

At the same time I'm pretty sure that it will wildly change any industry where creativity is critically important and quality control either isn't that important or can be done by amateurs. There is substance at the core of the hype.


Absolutely not.

It seems too exciting to me and I am eager to see more AI. It's fascinating stuff.


Happy to find a fellow soul. I'm fatigued of the complaints of AI fatigue - especially where the complaints aren't based on recent (last year or so) first hand use.

It's bold (to put it kindly) how lengthy some of these critical comments are from folks who later in the thread admit to not having personally used Copilot (for example) much themselves.

The quality of LLM output can wildly vary based on what prompts (or series of prompts) are used.


There's probably a koan for it, but separate the commercial from the noncommercial.

I'm excited for these emerging technologies, but I don't care about any of the products people want to sell based on them. I've spent the past 27 years developing zero-effort self-filtering against spam and hucksters, so I'm not even aware of any AI startups, just as I can't tell you the names of any Bitcoin exchanges. That's just not in my sphere, and I'm not missing out.

Hunker down and have fun. It's incredibly accessible, and you likely have more than you need to get started making it work for you.


I too feel overwhelmed by this sudden rush towards *GPT. The content generated by AI is slowly erasing the line between creative content and computer-generated content. I remember last year how many people were earning, or trying to earn, by creating art to sell as NFTs. Once Dall-E landed, the originality quotient of any creative content was lost. Likewise, ChatGPT is going to erase the originality in text content. Once the internet is mixed with AI-generated content, there is no going back. We won't be able to tell what's real work and what's AI-generated.


Yes but there's nothing new in what you say. Whenever something new comes out, people try to capitalize on the buzzword, even if what they're doing has zero relevance in practice. The whole "X but in Y" thing reinvents itself all the time. "X but in Rust". "X but on the blockchain". "X but with Neural Networks". "X but with nanobots". "X but quantum".

The best you can hope for, if you're a "Y" person, is for the marketers to get bored of the current Y and jump to the next one, leaving yours alone.


If you are getting AI fatigue, you never really scratched the surface and limited yourself to the hype train of AI, not actual AI.

AI is wide and deep, and its proper uses are so so far removed from mainstream media and the hype-train.

AI still has so many undiscovered areas of usefulness that it will do nothing short of transforming those areas.

But you hear most of the times about Stable Diffusion, see melted faces and weird fingers, and screenshots of ChatGPT.

These, in terms of breadth and depth, are nothing compared to what is possible.

So, no, I am not AI fatigued as I don't pay much attention to these hypes at all.


Although I understand your sentiment, I think it’s an inevitable phase in the development of any new groundbreaking technology.

People are still trying to figure out what the new AIs can and can’t be used for.

Some people will try to build ridiculous products that don’t work, but that’s just part of the learning process and those things will be weeded out over time.

There’s no ‘clean’ path to finding all the useful applications of these new models, so be prepared to be bombarded with AI powered tools for a few more years until the most useful ones have been figured out.


This happens during all hype cycles, with one big difference:

While crypto or VR tech still hasn't arrived in our daily lives, most of my friends are already using tools like ChatGPT on a regular basis.


It's very much the new blockchain. For the next year or so everything will have "AI" in the description because it produces a pavlovian response in VCs, then it'll move onto something else. So goes the tech industry.

None of this is new; there's a special magic phrase to attract VCs that changes every few years, and for now AI is it (we've actually been here before; there was a year or so a while back when everything was an "intelligent agent"/chatbot).


AI for actual applications, where mistakes are costly, is becoming an idiot marker, just as "crypto" has already become for projects other than non-decentralized finance or "smart" contracts. FOMO is great on this though, so every major investor/consultant is now starting to tell you to get shitcoins.

AGI could be ML-driven; most likely it is not. Neural nets are still AI tech. Even Bayesian inference is weakly AI tech.

The public always misuses words. Words change to match that meaning.


In the last ~15 years AI has been on the hype train but also had moments when it started to die down (remember chat bots?).

ChatGPT is the "new" booster shot, it's a hell of a boost and this one might stick. What will not stick is the copious amount of wishful thinking and bullshit the usual suspects are bringing in. ChatGPT is a godsend after crypto went bust and the locusts had to go somewhere else.

I suspect we will have to endure a crypto-craze like environment for a couple years at least..


From an engineering perspective ChatGPT makes some ridiculous errors and at times seems to fabricate random guesses. When these are pointed out it simply apologises and says "oh you right", even if it is being fed a lie.

When asked for references it cannot refer to any. Scientifically useless?

Until AI can filter out fact from fiction, it will continue to frustrate the technical people who rely on absolute truths to keep important systems running smoothly.


Me too. Especially since ChatGPT, the overhype has gone wild. I think it's because it's the first chatbot that repeatedly passes the Turing test for the masses. What annoys me the most is that people fall for the illusion of talking to an intelligent being, and the media (that I've read) does not seem to bother to debunk it.

That uncritical handling along with a growing offer can lead to the next big bullshit bubble.


I think with any research project, when it gets successful enough and complex enough, we stop seeing the future potential and get annoyed that we're not getting what we want from it.

Machine Learning research isn't "for us." Let the researchers do what they do, and toil away in boring rooms, and eventually, like the internet slowly did, it will be all around us and will be useful.


While I agree the name AI is pretentious, I personally and professionally embrace the technology.

Personally I enjoy creating language models and agent networks, at work I make predictive models so.. :)

Even if I didn't find the tech fascinating and especially the new emergent features of the big LMs, I would be left in the dust professionally if I ignored it. The tech really works for a lot of stuff.


I understand the sentiment, but I'm trying actively to get away from all the hype and focus on the capabilities it has today. It's being useful for a bunch of people, there are some threads on it such as: https://news.ycombinator.com/item?id=34589001


Every time I read people dismissing AI for the same tired old reasons, I am reminded of this line from the Psalms: “the Lord knows the thoughts of men, that they are vanity”.

The only thing we can definitely do better than machines is sad, proud sophistry. “Not real understanding, not real intelligence, just a stochastic parrot”. Sure, keep telling yourself that.


You may want to try to mentally reframe it so it doesn't bother you as much because it's not going away any time soon.


People are loud; that is the nature of public discourse. Ignore the noise. Focus on the things you find personally interesting - if they are Lindy-proof it does not hurt (this includes maths, science, any fundamental computational problem like ray tracing, for example). Use new stuff you personally find interesting or useful.


AI fatigue? Nope, far from it.

I asked some general questions to ChatGPT, and it gave me pretty coherent answers. But when I asked a really specific question like "How to rewrite the Linux kernel in Lisp", it gave me a seemingly gibberish answer.

This was about 2 months ago, BTW. Maybe ChatGPT has already learned more stuff and is smarter. Let's see...


> Maybe ChatGPT has already learned more stuff and is smarter. Let's see...

LLMs don’t have a mechanism to learn from interaction; their models are simply fed more data, and hopefully you’ll get better results, but you might just as well get worse results if said data isn’t well curated.


Maybe this will help https://chrome.google.com/webstore/detail/ai-just-some-if-st...

And you're not alone, I feel the same since ~2015


I feel the same way about the rapid proliferation of this latest generation of AI tech as I did when crypto began to take off: "this is interesting, but it will not deliver on the promises or live up to the hype, and there will be a lot of grifters and bad actors who will give the tech a bad name".


It seems to be part of a trend to prompt us, and in some ways mold us into giving responses, under the probably sincere guise of convenience, like the famous Word paper clip. I think some find that useful, but it does tend to take away agency. This may be the slide to the AIs taking over :-)


Well, most products today need to be adaptive and handle complex states; even if there are a few if statements in there handling something in an intelligent way, that's AI, it's a broad term after all. The problem is that it's also become a marketing buzzword.


There's much more than LLMs -- RL, robotics and graphs. AI hasn't reached mainstream gamedev yet, and general adoption among devs is low. I find it hard to convince my fellow devs from the old days (web, startups) to invest their time in learning AI.


All the things built on top of ChatGPT seem to me like bullshit, created simply to generate clickbait, or with no future whatsoever. The next AI big thing will be ChatGPT 5 or a competing model with lower memory requirements.


Before AI it was Crypto. Before Crypto it was Quantum computing. Fusion pops up from time to time with "huge breakthroughs" that mean working products could be just 20 years away.

People love the optimism and the paranoia and uncertainty.


It’s the usual AI boom and bust cycles. The term is just too good to let go for marketers. It’s instantly evocative for the general public.

Just wait for it to underdeliver. Investors will get scared and we will be back to calling it machine learning.


AI is a hype but still I am jumping on this horse. It is the game everyone has agreed on playing and I think I can build and sell a compelling product. Yes it is herd behaviour but don't think you'll be safer on your own.


AI is the new blockchain. Fortunately for AI, it will have its use, it will just be more mundane than everyone thinks it will be, which is better than the crypto stuff.

Or who knows, maybe there will be an application for blockchains too.


Yes, and it is sad because I see people trying to find problems for ChatGPT and saying this is the future... but the industries they are targeting have such a wide variety of concrete problems for a startup to solve.


It's important for the marketing machine to keep things going until Microsoft is able to transfer enough money from GOOG to MSFT and, if possible, Bing marketshare to be responsible for that move.


Yeah, this is the new gold rush, all the sharks and all the kids are joining the chase.

We've seen this pattern many times. And there is money to be made, for sure, but the value might not be there yet.


AI/ML in general seems good for situations when being wrong, indeed very wrong, occasionally is ok. So yes to AI driven recommendation engines, no to AI driven cars.


I haven't actually tried ChatGPT yet so it's just like the hundreds of other things I vaguely know of but don't really engage with. Not too bothersome.


Yes, so much this. This seems to be the same type of hype as blockchain was a couple years ago when everyone said that will solve all our problems.


The "I" in AI is just complete bullshit and I can't understand why so many people are in a awe of a bit of software that chains words to another based on some statistical model.

The sad truth is that ChatGPT is about as good an AI as ELIZA was in 1966, it's just better (granted: much better) at hiding its total lack of actual human understanding. It's nothing more than an expensive parlor trick, IMHO.

Github CoPilot? Great, now I have to perform the most mentally taxing part of developing software, namely understanding other people's code (or my own from 6 months ago...) while writing new code. I'm beyond thrilled ...

So, no, I don't have an AI fatigue, because we absolutely have no AI anywhere. But I have a massive bullshit and hype fatigue that is getting worse all the time.


I'm more fatigued by people denying the obvious that ChatGPT and similar models are revolutionary. People have been fantasizing about the dawn of AI for almost a century and none managed to predict the rampant denialism of the past few months.

I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self worth.


There's a fellow that kinda predicted it in 1950 [0]:

> These arguments take the form, "I grant you that you can make machines do all the things you have mentioned but you will never be able to make one to do X."

> [...]

> The criticisms that we are considering here are often disguised forms of the argument from consciousness. Usually if one maintains that a machine can do one of these things, and describes the kind of method that the machine could use, one will not make much of an impression.

Every time "learning machines" are able to do a new thing, there's a "wait, it is just mechanical, _real_ intelligence is the goalpost".

[0] https://www.espace-turing.fr/IMG/pdf/Computing_Machinery_and...


>Every time "learning machines" are able to do a new thing, there's a "wait, it is just mechanical, _real_ intelligence is the goalpost".

Just because people shift the goalposts doesn't mean that the new position of the goalposts isn't closer to being correct than the old position. You can criticise the people for being inconsistent or failing to anticipate certain developments, but that doesn't tell you anything about where the goalposts should be.


> I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self worth.

It's important to note that this is your assumption which I believe to be wrong (for most people here).


> I suppose it makes sense though. Denial is the default response when we face threats to our identity and sense of self worth.

Respectfully, that reads as needlessly combative within the context. It sounds like the blockchain proponents who say that the only people who are against cryptocurrencies are the ones who are “bitter for having missed the boat”.¹

It is possible and perfectly reasonable to identify problems in ChatGPT and similar technologies without feeling threatened. Simple example: someone who is retired and monetarily well off, whose way of living and sense of self worth are in no way affected by developments in AI, can still be critical and express valid concerns when these models tell you that it’s safe to boil a baby² or give other confident but absurdly wrong answers to important questions.

¹ I’m not saying that’s your intention, but consider that type of rhetoric may be counterproductive if you’re trying to make another understand your point of view.

² I passed by that specific example on Mastodon but I’m not finding it now.


> ChatGPT and similar models are revolutionary

For _what purpose_, tho? It's a good party trick, but its tendency to be confidently wrong makes using it for anything important a bit fraught.


If you're the type of person that struggles to ramp up production of a knowledge product, but has great success in improving a knowledge product through an iterative review process, then these generative pre-trained transformers are fantastic tools in your toolbox.

That's about the only purpose I've found so far, but it seems a big one?


It seems to me that the tendency to be confidently wrong is entirely baked into intelligence of all kinds. In terms of actual philosophical rationality, human reasoning is also much closer to cargo cults than to cogito ergo sum, and I think we're better for it.

I cannot but think that this approach of "Strong Opinions, Weakly Held" is a much stronger path forward towards AGI than what we had before.


If you work at a computer, it will increase your productivity. Revolutionary is not the word I'd use, but finding use cases isn't hard.


I can buy that it's a better/worse search engine (better in that it's easier to formulate a query and you get the response right there without having to parse the results; worse in that there's a decent chance the response is nonsense, and it's very confident when it's being wrong about things).

I can't really imagine asking it a question about anything I cared about and not verifying via a second source, though, given its accuracy issues. This makes it feel a lot less useful.


How will it do that?

One of major problems of modern computer-based work is that there are too many people already in those roles, doing work that isn't needed. Case in point: the culling of tens of thousands of software engineers, people who would consider themselves to be doing 'bullshit jobs'.


But will it? After accounting for the time needed to fix all the bugs it introduces?


Humans introduce bugs too. ChatGPT is still new, so it probably makes more mistakes than a human at the moment, but it's only a matter of time until someone creates the first language model that will measurably outperform humans in this regard (and several other important regards).


> it's only a matter of time until someone creates the first language model that will measurably outperform humans in this regard

This seems to have been the rallying cry of AI-ish stuff for the past 30 years, tho. At a certain point you have to ask "but how much time"? Like, a lot of people were confidently predicting speech recognition as good as a human's from the 90s on, for instance. It's 2023, and the state of the art in speech recognition is a fair bit better than Dragon Dictate in the 90s, but you still wouldn't trust it for anything important.

That's not to say AI is useless, but historically there's been a strong tendency to say, of AI-ish things "it's 95% of the way there, how hard could the last 5% be?" The answer appears to be "quite hard, actually", based on the last few decades.

As this AI hype cycle ramps up, we're actually simultaneously in the down ramp of _another_ AI hype cycle; the 5% for self-driving cars is going _very slowly indeed_, and people seem to have largely accepted that, while still predicting that the 5% for generative language models will be easy. It's odd.

(Though, also, I'm not convinced that it _is_ just a case of making a better ChatGPT; you could argue that if you want correct results, a generative language model just isn't the way to go at all, and that the future of these things mostly lies in being more convincingly wrong...)


Anyone still remembers the self-driving hype?


>> it's only a matter of time

That reminds me how in my youth many were planning on vacations to Mars resorts and unlimited fusion energy :) Stars looked so close, only a matter of time!


So, to you, ChatGPT is approaching AGI?


I do believe that if we are going to get AGI without some random revolutionary breakthrough, achieving it iteratively instead, it's going to come through language models.

Think about it.

What's the most expressive medium we have which is also absolutely inundated with data?

To broadly be able to predict human speech you need to broadly be able to predict the human mind. To broadly predict a human mind requires you build a model of it, and to have a model of a human mind? Welcome to general intelligence.

We won't realize we've created an AGI until someone makes a text model, starts throwing random problems at it, and discovers that it's able to solve them.


> I do believe if we are going to get AGI without some random revolutionary breakthrough, to achieve it iteratively, It's going to come through language models.

Language is way, way far removed from intelligence. This is well-known in cognitive psychology. You'll find plenty of examples of stroke victims who are still intelligent but have lost the ability to produce coherent sentences, and (though much rarer) examples of people who can produce clear, eloquent prose, yet have such severe learning and mental impairments that they can't even tell the difference between fantasy and reality.


We don't judge AIs by their ability to produce language; we judge them by their coherence and ability to respond intelligently, to give us information we can use.


> To broadly be able to predict human speech you need to broadly be able to predict the human mind

This is a non sequitur. The human mind does a whole lot more than string words together. Being able to predict which word would logically follow another does not require the ability to predict anything other than just that.


I think what the commenter is saying is that, in time, language models too will do a lot more than string words together. If it's large enough, and you train it well enough to respond to “what's the best next move in this chess position?” prompts with good moves, it will inevitably learn chess.


I don't think that follows, necessarily. Chess has an unfathomable number of states. While the LLM might be able to play chess competently, I would not say it has learned chess unless it is able to judge the relative strength of various moves. From my understanding, an LLM will not judge future states of a chess game when responding to such a prompt. Without that ability, it's no different than someone receiving anal bead communications from Magnus Carlsen.


An LLM could theoretically create a model with which to understand chess and predict a next move, you just need to adjust the training data and train the model until that behavior appears.

The expressiveness of language lets this be true of almost everything.


Exactly. Since language is a compressed and transmittable result of our thought, to predict text as accurately as possible requires you do the same. A model with understanding of the human mind will outperform one without.

> Being able to predict which word would logically follow another does not require the ability to predict anything other than just that.

Why? Wouldn't you expect that technique to generally fail if it isn't intelligent enough to know what's happening in the sentence?


"The ability to speak does not make you intelligent." — Qui-Gon Jinn, The Phantom Menace.


Perhaps a more interesting question is "how much better do we understand what characteristics AGI will have due to ChatGPT?"

We don't really understand what intelligence means -- in humans or our creations -- but ChatGPT gives us a little more insight (just like ELIZA, and the psychological research behind it, did).

At the very least, ChatGPT helps us build increasingly better Turing tests.


Why the obsession with AGI? The point is that ChatGPT is already useful.


Is it? I see it mostly generates BS much faster.


Brothers Grimm would like a word with you about what "BS" means.

ChatGPT is good at making up stories.


Yes. It is obviously already weak AGI (it would be obvious to anyone who saw it 20 years ago).

It is also obvious that we are in the middle of a shift of some kind. Very hard to see from within, but clearly we will look back at 2022 as the beginning of something


The problem is that ChatGPT is about as useful as all the other dilettantes claiming to be polymaths. Shallow, unreliable knowledge on lots of things only gets you so far. Might be impressive at parties, but once there's real, hard work to do, these things fall apart.


Even if ChatGPT could only make us 10% better at solving the "easy" things but on a global scale, that is already a colossal benefit to society.


As much as I’m sick of AI products, I’m even more sick of the “ChatGPT is bullshit” argument.


It can be both bullshit and utterly astounding.

In terms of closing the gap between AI hype and useful general purpose AI tools, no one can reasonably deny that it's an absolute quantum leap.

It's just not a daily driver for technical experts yet.


> quantum leap

Ironically accurate.


In normal English usage, a quantum leap is a step-change, a near-discrete rather than continuous improvement, a large singular advance.

Given we are not talking about state changes in electrons, there is nothing wrong with this description of ChatGPT - it truly does feel like a massive advance to anyone who has even cursorily played with it.

For example, you can ask it questions like "Who was born first, Margaret Thatcher or George Bush?" and "Who was born first, Tony Blair or George Bush?" and in each instance it infers which George Bush you are talking about.

I honestly couldn't imagine something like this being this good only three years ago.


(1) You are correct in that placing both of those questions into Google doesn't quite get you anywhere near the answer that I imagine ChatGPT gives you (as you point out). Although, Google does "infer" which Bush you are talking about, there isn't a clear "this person is older" answer, you have to dive into the wiki pages basically to get the answer.

(2) Counter. I asked it the other day "how many movies were Tom Hanks and Meg Ryan in together" and the answer ChatGPT gave was 2 ... not only is that wrong it is astonishingly wrong (IMO). You could be forgiven for forgetting Ithaca from 2015. I could forgive ChatGPT for forgetting that one. But You've Got Mail? That's a very odd omission. So much so I'm genuinely curious how it could possible get the answer wrong in that way. And for the record, Google presents the correct answer (4) in a cut out segment right at the top, a result and presentation very close to what one would expect from ChatGPT.

I don't know about other use cases like generating stories (or tangentially art of any kind) for inspiration, etc. But as a search engine things like ChatGPT NEED to have attributions. If I ask the question "Does a submarine appear in the movie Battlefield Earth?" it will confidently answer "no". I _think_ that answer is right, but I'm not really all that confident it is right. It needs to present the reasons it thinks that is right. Something like "No. I believe this because (1) the keyword submarine doesn't appear in the IMDb keywords (<source>), (2) the word submarine doesn't appear in the wikipedia plot synopsis (<source>), (3) the film takes place in Denver (<source>) which is landlocked making it unlikely a submarine would be found in that location during the course of the film."

The Tom Hanks / Meg Ryan question/answer would at least be more interesting if it explained how it managed to be so uniquely incorrect. That question will haunt me though ... there's some rule about this, right? Asking about something you have above-average knowledge in and watching someone confidently answer it incorrectly. How am I supposed to ever trust ChatGPT again about movie queries?


The biggest thing I’ve learned from chatGPT is that real people struggle with the difference between intelligence, understanding, and consciousness / sentience.


Because they are all ill defined in the manner they are used in common language. Hell, we have trouble describing what they are, especially in a scientific fact based setting.

Before this point in history we accepted 'I am that I am' because there wasn't any challenger to the title. Now that we are putting this to question we realize our definitions may not work well.


>The biggest thing I’ve learned from chatGPT is that real people struggle with the difference between intelligence, understanding, and consciousness / sentience.

Well, I'm no fan of chatGPT. But it appears most people are worse than chatGPT, because they just regurgitate what they hear with no thought or contemplation. So you can't really blame average folks who struggle with the concepts of intelligence/understanding that you mention.


Which should be no surprise, as people have been grappling with these ideas for centuries, and we still don't have any definitive idea of what consciousness/sentience truly is. What I find interesting is that at one point the Turing test seemed to be the gold standard for intelligence, but chatGPT could pass that with flying colors. So how exactly will we know if/when true intelligence does emerge?


Well, my point wasn’t that there is a good definition of consciousness.

My point was that “consciousness” and “intelligence” are very different things. One does not imply the other.

Consciousness is about self reflection. Intelligence is about insight and/or problem solving. The two are often correlated, especially in animals, especially in humans, but they’re not the same thing at all.

“Is chatgpt consciousness” is a totally different question than “is chatgpt intelligent”.

We will know chatgpt is intelligent when it passes our tests of intelligence, which are imperfect but at least directionally correct.

I have no idea if/when we will know whether chatgpt is conscious, because we don’t really have good definitions of consciousness, let alone tests, as you note.


The most annoying thing to me is people thinking AI wants things and gets happy and sad. It doesn't have a mammalian or reptilian brain. It just holds a mirror up to humanity generally, via matrix math and probability.


Well said. It is a mistake to anthropomorphize large language models; they really hate that.


The only problem with the “ChatGPT is bullshit” argument is that it is only half true.

ChatGPT, when provided with a synthetic prompt, is reliably a synthesizer, or to use the loaded term, a bullshitter.

When provided with an analytic prompt, it is reliably a translator.

Terms, etc: https://www.williamcotton.com/articles/chatgpt-and-the-analy...


> ChatGPT, when provided with a synthetic prompt, is reliably a synthesizer, or to use the loaded term, a bullshitter.

sounds like most people tbf


There are people who, in many situations, use as much critical thought as ChatGPT does.

ChatGPT isn't as good as a human who puts in a lot of effort, but in many jobs it can easily outperform humans who don't care very much.


I like this take. It has many clear applications already, and LLMs are still only in their infancy. I both criticize and use ChatGPT at work. It has flaws and it has advantages. That it's bullshit or "ELIZA" is a short-sighted view that overvalues the importance of AGI and misses what we're already getting.

But yes indeed, there are many, many AI products launched during this era of rapid progress. Even kind of shoddy products can be monetized if they provide value over what we had before. I think the crowded market and all the bullshit and all the awesome, all at once, is a sign of very rapid progress in this space. It will probably not always be like this and who knows what we are approaching.


How are you using it at work?


I've used it to proof emails for grammar, and it's done ok.

I'll also throw random programming questions into it, and it's been hit and miss. SO is probably still faster, and I like seeing the discussion. The problem with chatGPT right now is it gives an answer as if it's a certainty when it's often wrong.

I can see the benefits of this interaction model (basically summarizing all the things from a search into what feels like a person talking back), but I don't see change the world level hype at the moment.

I also wonder if LLMs will get worse over time through error propagation as content is generated by other LLMs.


I’m not the person you replied to but I’ve been using OpenAI’s API a lot for work. Some examples:

- Embedding free text data on safety observations, clustering them together, using text completion to automatically label the clusters, and identifying trends

- Embedding free text data on equipment failures. Some of our equipment failures have been classified manually by humans into various categories. I use the embeddings to train a model to predict those categories for uncategorized failures. (See the sketch after this list.)

- Analyzing employee development goals and locating common themes. Then using this to identify where there are gaps we can fill in training offerings.
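
A minimal sketch of that second item (my own illustration, with made-up failure texts; it assumes the pre-1.0 openai Python client, scikit-learn, and an API key in the environment):

    # Train a simple classifier on embeddings of already-categorized failure
    # reports, then predict categories for uncategorized ones.
    import openai
    from sklearn.linear_model import LogisticRegression

    labeled_texts = ["Pump seal leaking at flange", "Motor overheating after 2 hours"]
    labels = ["seal_failure", "thermal"]
    unlabeled_texts = ["Oil seeping from pump housing"]

    def embed(texts):
        resp = openai.Embedding.create(model="text-embedding-ada-002", input=texts)
        return [item["embedding"] for item in resp["data"]]

    clf = LogisticRegression(max_iter=1000).fit(embed(labeled_texts), labels)
    print(clf.predict(embed(unlabeled_texts)))  # likely ["seal_failure"]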


I can say with a certain degree of confidence that you haven't actually used CoPilot daily.


I've worked with teams that used Copilot. They claim it's great ("Hey, now I don't have to actually spend any time writing all this boilerplate!"), while for me, the person who has to review their code before releasing stuff, an easier way of writing boilerplate is not a positive, it's a negative.

If writing boilerplate becomes effortless, then you'll write more of it, instead of feeling the pain of writing it and then trying to reduce it, because you don't want to spend time writing it.

And since Copilot was accepted as a way to help the developers on the teams, the increase in boilerplate has been immense.

I'm borderline pissed, but mostly at our own development processes, not at Copilot per se. But damn if I don't wish it didn't exist, although it was inevitable it would at some point.


I feel ya. If your job is to kick back bad code, and now there is a tool that generates bad code, how does this not make your job more important?

Why not get some of the freed-up, Copilot-augmented developer labor budget moved to testing, and use it to do more there or to build tools that make your own repetitive, boilerplate tasks more efficient?

If the coders are truly just dumping bad code your way, that's an externality and the cost should be called out.


I use GitHub Copilot on a daily basis and it shortens my time from thinking to code.

Often I'm thinking about a specific piece of code that I need, I have it partially in my head, and GitHub Copilot "just completes" it. I press tab and that's it.

I'm not talking about writing entire functions where you have to mentally strain yourself to understand what it wrote.

But I've never seen any autocompleter do it as well as GitHub Copilot. Even for documentation purposes, like JSDoc and related commenting systems, it's amazing.

It's a tool I pay for now since it's proven to be a tool that increases my productivity.

Is it gonna replace us? I hope not, but it does look promising as one of those tools people will talk about in the future.


It would be helpful if people could include in their assessment roughly how much time they've personally spent using these tools.

Helping write boilerplate is to Copilot what cropping is to Photoshop.

Some of the ways I've found Copilot a powerful tool in my toolbox: writing missing comments (especially in unfamiliar code bases), "translating" parts of unfamiliar code into a more familiar language, and suggesting ideas for how to implement a feature (!) in comments.
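To illustrate the last of those, here's a hypothetical "write the intent as a comment, let Copilot propose the body" exchange. Everything below is invented for the example; what Copilot actually suggests varies a lot by context.

    import re
    from typing import Optional

    # Extract the ISO date (YYYY-MM-DD) embedded in a report filename,
    # e.g. "safety_report_2023-02-09_final.pdf" -> "2023-02-09".
    def extract_report_date(filename: str) -> Optional[str]:
        match = re.search(r"\d{4}-\d{2}-\d{2}", filename)
        return match.group(0) if match else None

Typically only the comment and the def line are typed; the body is the kind of thing the tool offers as a single suggestion.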


>the increase in boilerplate has been immense

Has it really? Or are you worried that this is something that will happen?

Of course I don't know how other people use it but I find that it's very much like having a fairly skilled pair programmer on board. I still need to do a lot of work but I get genuine help. I don't find that I personally write more boilerplate code than before, every programming principle applies as it always has.


I wrote it in past tense, it's based on actual situations :) If you don't believe what I write, I guess it doesn't matter what I write now. Regardless.

One simple example that I've had to reject more than once:

- Function 1 does something

- Developer needs something like Function 1 but minor change

- Developer starts typing name of function which has a similar name to Function 1, but again, minor difference

- Copilot helpfully suggests copy-pasting Function 1 but with the small change incorporated

- Developer accepts it, commits and sends the patch my way

Rather than extracting the common behavior into its own function and calling that from both of them, a refactor which Copilot doesn't suggest, the developer is fine with just copy-pasting the function.

Now we have to maintain two full, slightly different functions, rather than one full function plus two minor ones.
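A toy version of the pattern, with invented function names, to show the difference between what tends to get committed and the refactor the review keeps asking for:

    # What tends to get committed: Function 1 plus a near-copy with one change.
    def invoice_total_usd(items):
        total = sum(item["price"] * item["qty"] for item in items)
        return f"Total: {total:.2f} USD"

    def invoice_total_eur(items):
        total = sum(item["price"] * item["qty"] for item in items)
        return f"Total: {total:.2f} EUR"

    # The refactor Copilot doesn't volunteer: extract the shared behavior once.
    def invoice_total(items, currency):
        total = sum(item["price"] * item["qty"] for item in items)
        return f"Total: {total:.2f} {currency}"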

Obviously a small example, and it wouldn't be worth extracting the first time it happens or at a smaller scale. But once you have entire teams doing something like this, it becomes a bit harder to justify the copy-paste approach, especially when you don't want the codebase to evolve into complete spaghetti.

And finally, I'm not blaming the tool, it's not Copilots fault. But it does seem to have made developers who rely on it think less, compared to the ones that don't.


you could stop doing code reviews and do something else


I haven't. Now you know for a fact :)

What I have seen of it ranged from things that can be handled nearly as well by your $EDITOR's snippet functionality to things where my argument kicked in: I have to verify this generated code does what I want, ergo I have to read and understand something not written by me. Paired with the at least somewhat legally and ethically questionable source of the training data, this is not for me.


So stop evangelizing about stuff you haven’t used. Understanding code is easier than writing it from scratch. That’s why code review doesn’t take as much time as writing the code, and you still need to prove your code works even if you wrote it yourself.


Understanding code is only easier for simple tasks. I've definitely had copilot spit out complex algorithms that looked right at first glance but actually had major issues that required me to write it from scratch.


That’s why you test; this could also happen with code you wrote, so it’s not an argument against Copilot. Did you write your “complex algorithm” and then run and debug it in your head? No, you tested it. Do the same with Copilot’s code.


If testing is the equalizer, there is no difference between black box code and something you fully understand. Which fair enough, is how ML works in general.


I contend understanding the semantics of code is harder than writing the syntax. Reading the syntax without thinking deeply (to the level needed to write it, or deeper) seldom helps you realize unexpected corner cases. This is why stochastic testing is so valuable.
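Assuming "stochastic testing" here means something in the family of property-based or randomized testing, a minimal sketch with the Hypothesis library shows the idea: random inputs probe corner cases that reading the syntax rarely surfaces. The function and property below are invented for illustration.

    from hypothesis import assume, given, strategies as st

    def clamp(value, lo, hi):
        return max(lo, min(value, hi))

    @given(st.integers(), st.integers(), st.integers())
    def test_clamp_stays_in_range(value, lo, hi):
        # Randomized inputs exercise corners (lo == hi, huge ranges, etc.)
        # that a reviewer skimming the code may never think about.
        assume(lo <= hi)
        assert lo <= clamp(value, lo, hi) <= hi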

Addendum: Code review takes less time than writing code for the same reason reading a book takes less time than writing one. Distillation and organization of ideas requires expertise gained through experience and long thought. Reading a book requires reading ability.

Understanding a book (and the intricacies underlying it) takes effort on the order of the original writing, but most people don't seek that level of understanding. The same is true of code.


I am not evangelizing, I am just stating why this is not for me and my way to write software.


Code review often takes me longer than writing code. More generally, reading other people's code is more difficult for me than writing (or reading) my own.


I've used it quite a lot and I agree with the original post. It seemed really useful at first, but then it started introducing several bugs in large blocks of code. I've stopped using it in the end, since the small, one-line snippets are trivial enough to write myself (with just vim proficiency) and the larger, function-sized completions are too bug-prone (and eat too much of my willpower budget to fix).


Yep. I’m personally skeptical of so many other use cases for LLMs but CoPilot is fantastic and basically just autocomplete on rocket fuel. If you can use autocomplete, you can use CoPilot super effectively.


I almost always turn autocomplete off except in circumstances where the API has bad documentation. I also found that copilot was an aggravation more than a help after using it for a couple weeks.


We programmers enjoy writing code. We derive satisfaction when code is perfect and elegant. But it's going to end very soon. Artists are freaking out because things that took them days to create now take only two seconds. We are next.

The writing is on the wall. Programming as we know it is going to end. We should be embracing these tools and start moving from software developer roles to software architect roles.


I don't think you're getting what I'm saying. I'm _faster_ without autocomplete


This is such a bullshit answer. No, I don't use it daily, because I tried it for a couple of hours and it suggested nothing useful and several harmful things. Why would I keep using it?


[flagged]


I've used it and it did nothing helpful. I also find autocomplete slows me down. The code it suggested always needed enough reworking that I would have been faster writing it out from scratch. It's just not that helpful for me. Maybe if I didn't know the APIs that well, but I suspect even then it would be as much a liability as a benefit.


90% of the time, CoPilot-bros stop at this point without giving any good examples of how this post-autocomplete monster helped them. Autocomplete works quite well in most use cases: it is low-effort, free, and, most importantly, ethical. CoPilot, on the other hand, jumps through so many hoops to generate something marginally and arguably better, but at what cost? This is exactly like the search vs. ChatGPT problem: do you want a deterministic, algorithmic, fine-tuned experience, or some random probabilistic, overconfident crap?


I don't find basic autocomplete useless, although I probably use it less than most people. (For example, I rarely write Java and when I do I don't use long names or deeply nested structures and I don't implement equals etc. by default.) I think people use autocomplete in two ways; when I use it I know what I want to be in the code and it's a way to type 30 characters by pressing 3 keys or whatever. But I also see a lot of people use it like "I don't know what to do next, what methods are available?" And this is usually to the detriment of the code quality.

Copilot is not like autocomplete. It only works in the second mode, because any nontrivial code it generates needs to be read, considered, and understood. (And any trivial code it generates can be done by autocomplete or long-existing non-AI tools.) This is especially true given LLMs' hallucinatory behavior - by definition it will often spit out something that "looks right" even if it's absolutely not - and such code is harder to review than code that looks obviously wrong.
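A made-up illustration of that failure mode: a completion that reads fine at a glance but is subtly wrong.

    # Looks like a reasonable chunking helper, but it silently drops the
    # trailing partial chunk: chunk([1, 2, 3, 4, 5], 2) -> [[1, 2], [3, 4]].
    def chunk(items, size):
        return [items[i:i + size] for i in range(0, len(items) - size + 1, size)]

    # The boring, correct version: chunk([1, 2, 3, 4, 5], 2) -> [[1, 2], [3, 4], [5]].
    def chunk_fixed(items, size):
        return [items[i:i + size] for i in range(0, len(items), size)]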

So if you do use autocomplete in the second mode, maybe you find Copilot a super-powered version of that. And if you have the same weaknesses as Copilot, reviewing its code after it's done writing it is probably not any different than reviewing your own code after writing it, so for you it takes the same amount of time. For me, that's not the case.

When I used it, and when I see others use it, Copilot is like an impossibly overenthusiastic junior developer I will never be able to teach better habits to.


> They replace the default autocomplete in a way which is largely unnoticeable, but surprisingly effective at complex autocomplete tasks.

I've yet to see it. It's barely above IDEA's autocomplete in the rare cases when it manages to trigger on my code, and it has already been wrong more than once in the few times it did deign to provide a completion.


Hey, maybe don't outright call people liars about their own lived experiences just because they don't agree with yours...?


I can say with a higher degree of confidence that you haven't actually used CoPilot daily for any respectably sized project.


oops, you are wrong :)


> The "I" in AI is just complete bullshit and I can't understand why so many people are in a awe

I agree.

And the worst thing is that the bullshit hype comes round every decade or so, and people run around like headless chickens insisting that "this time it's different" and "this time it's the REAL THING".

As you say, first(ish) there was ELIZA. Then this, that, and everything else. Then Autonomy and all that dot-com-era jazz. Now, with compute becoming more powerful and more compact, any man and his dog can stuff some AI bullshit where it doesn't belong.

I have seen comments below on this thread where people talk about "well, it's closing the gap". The thing you have to understand is that the gap will always exist. Ultimately you will always be asking a computer to do something. And computers are dumb. They are and will always be beholden to the humans that program them and the information that you feed them. The human will always have the upper hand at any tasks that require actual intelligence (i.e. thoughtful reasoning, adapting to rapidly changing events etc.).


> And the worst thing is that the bullshit hype comes round every decade or so, and people run around like headless chickens insisting that "this time it's different" and "this time it's the REAL THING".

This. To answer the OPs question, this is what I'm fatigued about.

I'm glad we're making progress. It's a hell of a parlor trick. But the hype around it is astounding considering how often its answers are completely wrong. People think computers are magic boxes, and so we must be just a few lever pulls away from making it correct all the time.

Or maybe my problem is that I've overestimated the average human's intelligence. If you can't tell ChatGPT apart from a good con-man, can we consider the Turing test passed? It's likely time for a redefinition of the Turing test.

Instead of AI making machines smarter, it seems that computers are making humans dumber. Perhaps the AI revolution is about dropping the level of average human intelligence to match the level of a computer. A mental race to the bottom?

I'm reminded of the old Rod Serling quote: We're developing a new citizenry. One that will be very selective about cereals and automobiles, but won't be able to think.


I'm having a really hard time following your argument. But absolutely agree we need to redefine the Turing test. Only problem is that I can no longer come up with any reasonable time-limited cognitive task that next year's AI would fail at, but a "typical human" would pass.


"Intelligence" is probably too nebulous a term for what it is we're trying to build. Like "pornography", its hard to rigidly define, but you know it when you see it.

I think "human level intelligence" is an emergent phenomenon arising from a variety of smaller cognitive subsystems working together to solve a problem. It does seem that ChatGPT and similar models have at least partially automated one of the subsystems in this model. Still, it can't reason, doesn't know it's wrong, and can't lie because it doesn't understand what a lie is. So it has a long way to go. But it's still real progress in the sense that it's allowing us to better see the dividing lines between the subsystems that make up general intelligence.

I think that we'll need to build a better systems level model of what general intelligence is and the pieces it's built out of. With a better defined model, we can come up with better tests for each subsystem. These tests will replace the Turing test.


>>Instead of AI making machines smarter, it seems that computers are making humans dumber. Perhaps the AI revolution is about dropping the level of average human intelligence to match the level of a computer. A mental race to the bottom?

I came here to make this comment. Thank you for doing it for me.

I remember feeling shocked when this article appeared in the Atlantic in 2008, "Is Google Making Us Stupid?": https://www.theatlantic.com/magazine/archive/2008/07/is-goog...

The existence of the article broke Betteridge's law for me. The fact that this phenomenon is not more widely discussed illustrates the limits of human intelligence. Which brings me back around to the other side... perhaps we were never as intelligent as we suspected?


> perhaps we were never as intelligent as we suspected?

Yeah, I think you're right. Intelligence is just something our species has evolved as a strategy for survival. It isn't about intelligence, it's about survival.

The cognitive skills needed to survive/navigate/thrive in the digital era are very different than the cognitive skills required to survive in the pre-digital era.

We're biologically programmed through millions of years of evolution to survive in a world of scarcity. Intelligence used to be about tying together small bits of scarce information to find larger patterns so that we can better predict outcomes.

Those skills are being rendered more and more irrelevant in a world of information abundance. Perhaps the "best fit" humans of the future are those that possess a new form of "intelligence", relying less on reason and more on the ability to quickly digest the firehose of data thrown at them 24/7.

If so, then the AI we were trying to build in the 1950s would necessarily be different than the AI that our grandchildren would find helpful.


You're dead on. Isn't it wild that despite our seemingly impressive intelligence, such insights never seem to rise to the level of... second nature.

I forgot to add something to my original post. >>"I remember feeling shocked when this article appeared in the Atlantic in 2008..."

At the time I was shocked that the question was even being asked!


This is not always true, see Chess.


AlphaGo as well. A few years back people were saying AI could never come close to beating a human at Go.


Man, if this were 1800 you'd be stating that man would never fly and the horse would never be supplanted by the engine. I honestly don't believe you have any scientific or rational basis for the point you are attempting to make in your post, because if you did, you'd be claiming that animal intelligence is magical.


> Man, if this were 1800 you'd be stating that man would never fly and the horse would never be supplanted by the engine.

I'm sorry, what sort of bullshit argument is that?

Flight and engines are both natural evolution using natural physics and mechanics.

Artificial Intelligence is nothing but a square-peg-round-hole, when-you-have-a-sledgehammer-everything-looks-like-a-nut scenario.


They seem natural to you, maybe, with hindsight? Powered flight was most definitely not considered natural at the time. In fact, most attempts at flight were trying to mimic birds at first.


Flight and engines are natural evolution but intelligence is magic? Nature accomplished intelligence via random walk and it is a complicated mess because of it. To think that we cannot accomplish at least parts of intelligence is insane to me.


“AI” isn’t bullshit, it’s correctly labeled. It’s intelligence which is artificial: i.e. fake, ersatz, specious, not genuine… It’s our fault for not just reading the label. (I absolutely agree with your post and your viewpoint, just to be clear!)


Artificial means "not human" in this context for me, but I understand "intelligence" as the ability to actually reason about something based on things you learned and/or experienced, and these "AI" tools don't do this at all.

But defining "intelligence" is a philosophical question that doesn't necessarily have one answer for everything and everyone.


Personally, I try to take a more inductive approach. We don’t know what intelligence is, but we assume it’s something we exhibit. We also clearly recognize other animals as possessing the same trait to varying degrees. Since we don’t know what it is, and since (I would argue) we can only convincingly claim that it exists in other biological organisms without meeting a high burden of proof, claiming that it exists in an inorganic substrate requires a VERY large burden of proof to be met, similar to what would be required if you were claiming that magic existed. In my view, calling computers “intelligent” is in the same league as claiming that crystals are magic. Of course, this depends on my own philosophical interpretation of what intelligence is, as you say.


Intelligence is a capability not a mechanism, and therefore if you're able/willing to define what that capability is, there should be no problem measuring/gauging the intelligence of any system, biological or not. You don't need to look inside the black box - you only need to test if the black box has this capability.

Intelligence may be a fuzzily defined word in everyday usage, but I don't think it's the mystery you present it to be. Joe public may argue against any and all definitions of the word that they personally disagree with (maybe just dislike), but it's nonetheless quite easy to come up with a straightforward and reductive definition if you actually want to!


You and I are clearly referring to two different things when we use the word “intelligence”. It is also not nearly so easy to come up with a simple/mechanical/verifiable definition for the thing that you’re referring to. Unless you have a good definition—in which case, you might want to put your money where your mouth is and get busy revolutionizing multiple fields of human inquiry!

It’s also plain that many people are very interested in looking inside the black box and think the contents of the black box are relevant and important. This fact doesn’t change just by your saying so.


People are interested in looking inside the black box (our brain) for sure, partly for inspiration as to how to implement intelligence among other things, but implementing isn't the same as defining.

Being able to define what you want to achieve isn't generally the same as knowing HOW to achieve it (except in this case the definition of intelligence rather does suggest the right path).


The intention of the "artificial" in "AI" is not that particular meaning of "artificial", but the one for "constructed, man-made"—see meaning #1 in the Wiktionary definition[0]; the one you are using is #2.

It is often frustrating that English has words with such different (but clearly related) definitions, as it can make it far too easy to end up talking past each other.

[0] https://en.wiktionary.org/wiki/artificial


Uh oh, are you an AI? You seem to have missed my attempt at wry humor.



"Artificial" is not synonymous with "fake". "Fake" implies a level of deception.


Actually it is [1] [2]

[1] Synonyms of artificial has "faked" : https://www.thesaurus.com/browse/artificial

[2] Synonyms of fake has "artificial": https://www.thesaurus.com/browse/fake


Not necessarily true. People talk about “fake meat” all the time but it’s clear there’s no level of fraudulence implied by this usage. It’s meant in the sense of “artificial meat”. There are multiple ways the word “fake” is used, and one is as a synonym for “artificial”.

However, in this case, it does seem that there is a level of fraudulence and deception. Given that “fake” often is used exactly the way you say, maybe “fake intelligence” would indeed be a more appropriate term.


Have you been around people who say "fake meat?" Every time I've heard it, it was said derisively and implied fraudulent meat.


Yes. I’ve heard loads of people refer to it as fake without an implied pejorative meaning.


The people I know that like it use artificial and the people that won't try it use fake.


Fascinating. Not all people are the same. Who’d have thought?


Definitely. Quirks like this are also what makes AI difficult.


Fake meat is still a form of deception; it's something that's not meat pretending to be meat. If lab grown meat gets good enough to be indistinguishable from "real" meat, then it would no longer be fake, it would just be artificial.


Is “fake grass” (i.e. astroturf) a form of deception in your eyes?


When seen at a distance, it can trick people into thinking you have a perfectly manicured lawn. But you're just lazy/evil/it's a holiday house. Deception!


Haha. I give up. :-)


Superficial Intelligence may hit the mark there.


I agree with you completely. I work in the field, and I think your sentiment is way more common amongst people who know the technology than among the fair-weather fans who have all jumped on the hype bandwagon recently. I actually posted the same thing (that it's no different than ELIZA) a month or so ago, and got at least one hilarious dismissal, like the "I bet you make widgets" person that replied to you.


If you believe that ChatGPT is similar to ELIZA, then I can guarantee that you have no rigorous, no-wriggle-room definition of what intelligence is. Maybe you think you understand it, or have defined it, but I'm 100% certain any such definition is not 100% reductive and instead relies on other ill-defined words like "reasoning" etc.


Thank you!


“It’s just statistics” is an evergreen way to dismiss AI. The problem is you’re also just statistics.


Source for consciousness / intelligence to be "statistics"?

I don't think there is one, because there is no functional model of what organic intelligence is or how it operates. There is a plethora of fascinating attempts / models, but only a subset claim that it is solely "statistical". And even if it were statistical, the implementation of the wet system is absolutely not like a gigantic list of vectorized (stripped of their essence) tokens.


That's like saying that airplanes aren't flying since they're not flapping their wings. Intelligence is a capability - not a specific mechanism.

Consciousness is a subjective experience (regardless of what you believe/understand to be responsible for that experience), so discussing "consciousness/intelligence" is rather like discussing "cabbages/automobiles".


Sources for intelligence being magic? I mean, we know it's complicated, but intelligence also spans everything from the smallest creatures on the planet to humans. This points at intelligence being a reducible, layered problem. On top of that, it's unlikely we need to model nerve behavior to get intelligence-like output.


Look at how Microsoft is instructing GPT to become "Sydney" and re-evaluate your opinions about what is intelligence:

https://twitter.com/marvinvonhagen/status/162365814434901197...


There’s a man who claims to have solved consciousness as a multilayered Bayesian prediction system.

See Scott Alexander for attempts to explain what are apparently impenetrable papers on it.


Shh. The models don’t like hearing that.


> Github CoPilot? Great, now I have to perform the most mentally taxing part of developing software, namely understanding other people's code (or my own from 6 months ago...) while writing new code. I'm beyond thrilled ...

I think there's an argument to be made that AI is being used here to help you tackle the more trivial tasks so you have more time to focus on the more important and challenging ones. Albeit I recognise GitHub CoPilot is legally questionable.

But yes, I agree with your overall point that AI has still not been able to 'think' like a human but rather can only still pretend to think like a human, and history has shown that users are often fooled by this.


I think the parent’s comment is probably referring to the fact if you use Copilot to write code then you have to go through and try to understand what it wrote and possibly debug it. And you don’t have the opportunity to ask it why it wrote it the way it did when reviewing its code.


I think you’re right, but that just means parent doesn’t understand copilot and is off tilting at windmills.

Copilot is amazing for reducing the tedium of typing obvious but lengthy code (and strings!). And it’s inline and passive; it’s not like you go edit -> insert -> copilot function and it dumps in 100 lines of code you have to debug. Which is what it sounds like parent is mistaking it for.

I’m reminded of 1995, when an elderly relative told me everything wrong with the internet based on TV news and not having ever actually seen the internet.


> Copilot is amazing for reducing the tedium of typing obvious but lengthy code (and strings!)

Which it occasionally mistypes. Then you're off to chase a small piece of error in a tub of boilerplate. Great stuff! For actual example, see [0]

[0] https://blog.ploeh.dk/2022/12/05/github-copilot-preliminary-...


You must be a much better programmer than I if those are examples you’d use copilot for. I was thinking more like:

   start_value = get_*start_value(user_input)*
   self.log.d*ebug(f'got start_value {start_value}')*
. . . where the would-be italics (the parts between asterisks) are what Copilot would likely suggest as the completion.

And if it’s wrong, you just. . . keep typing. It’s autocomplete, just like IDEs have for other things. I’m kind of astounded that people have such an emotional reaction to an optional, low-key, passive, easily-ignored tool that sometimes saves a bunch of typing. Yes, if you always accept the suggestions you’ll have problems. Just like literally every other coding assistance tool.


That's not my blog, I just thought the example to be relevant.

> I was thinking more like:

That example is straight up from any of those "programming is not bound by typing speed" essays of yore.

> people have such an emotional reaction to an optional, low-key, passive, easily-ignored tool that sometimes saves a bunch of typing.

Maybe because it's not generally advertised by proponents as "an optional, low-key, passive, easily-ignored tool that sometimes saves a bunch of typing"? Just look at the rest of the thread, it's pronounced as a game-changer in productivity.


Different experiences, I guess. I’m a low end, part-time hobbyist programmer, and for me at least 75% of my time is spent essentially typing in obvious, easily-checked code. It has been a game changer for me. It’s also led me to write better comments, because rather than being a pure tax, they improve the generated code.

I can see how someone who’s always working on sophisticated, mentally challenging code would get less benefit and would see more frequent errors.


But it trickles in small chunks at a time, unless you are just smashing tab repeatedly and not looking at what it did until the very end. You can also decline what it offers and just continue writing the code yourself. If a dev submits a bunch of Copilot code they don't understand and can't answer questions about, you reject the PR outright, and they eventually realize it didn't save them any time or effort. Copilot isn't the employee.


As soon as I open a fresh IDE these days I immediately miss CoPilot and it's the first thing I install.

Hype or not, it's incredibly useful and has increased my productivity by at least 20%. Worth every penny.


I agree. I didn't understand the big deal about it passing a Google interview either. IMO, that said more about the uselessness of the interview than about the 'AI'.

Co-pilot has been semi-useful. It's faster than searching SO, but like you said, I still have to review all the code, and it's often wrong in subtle ways.


This is the meat of the issue - ChatGPT is exposing that certain things are susceptible to bullshit attacks; humans have just been relatively bad at those.

It will turn out to be a useful tool for those who know what they’re asking about so they can check the answer quickly; but it will be USED by tons of people who don’t have a way of verifying the answers given.


ChatGPT is of actual help for me in various daily tasks, which was never the case with ELIZA or earlier chatbots which were only good as a curiosity or to have some fun.

Lack of actual human understanding? Of course, by definition a machine will always lack human understanding. Why does that matter so much if it's a helpful tool?

For what it's worth, I do agree that there is a lot of hype. But contrary to blockchain, NFTs, web3, etc., this is actually useful for many people in many everyday use cases.

I see it as more similar to the dot com hype - buying a domain and creating a silly generic website didn't really multiply the value of your company as some people thought in that era, but that doesn't mean that websites weren't a useful technology with staying power, as time has shown.


I'm sorry, I don't want it to get much smarter.

If you ask it to go through and comment code, it does a pretty good job of that.

Some things it does better than others (it's not that great at CSS).

Need a basic definition of something? Got it.

Tell it to write a function and it's not bad.

As a BA, just tell it what you're trying to do and what questions it should ask users. It will come up with some good ideas for you.

Want it to be a PM? Have it create a loop asking every 10 minutes if you're done yet.

Is it a senior engineer? No. Can it pass a senior engineering interview? Quite possibly.

Debugging code: hit or miss.

I think the big thing is that it's not that great at front-end code. It can't see, so that probably makes sense. A fine-tuned version of CLIP that interacted with a browser would probably be pretty scary.


What's the point of letting it comment code? The programmer who reads the code can run it as well.


I don't really think of ChatGPT as AI at this point, just an incredibly useful tool.


I wonder if we will look back at this comment (and others like it) as similar to the infamous “takedown” of Dropbox when it was first posted on HN.

Time will tell, I certainly can’t predict.


[flagged]


We've banned this account for repeatedly breaking the site guidelines and ignoring our request to stop.

If you don't want to be banned, you're welcome to email hn@ycombinator.com and give us reason to believe that you'll follow the rules in the future. They're here: https://news.ycombinator.com/newsguidelines.html.


The "I" in AI is just complete bullshit

We're about six minutes away from "AI bros" becoming a thing.

The same kind of grifters who always latch onto the latest thing and hype it up in order to make a quick buck are already knocking on AI's door.

See also: Cryptocurrency, and Beanie Babies.


No. I’ve ignored the whole thing so far.

I just use the features on the iPhone where some photos get enhanced or I can detect and copy text from images.

So far it’s going very well.


No, I think it is just that you are browsing news sources that only follow the trends. Follow more independent thinkers and things will get better.


I kinda like how the tech bubble feels when "everyone" is excited about the same thing... (be it Lisp, a text editor or AI)


Not really. Deep learning has continued to deliver since 2012. You can't say that about crypto, or any other new tech.


No, I absolutely love it and can't get enough of it. I'm really happy that AI tools are going mainstream.


I told ChatGPT to implement itself. It was a WOW moment for me. It's like I was reborn into the next century.


As tired as of Cryptocoins/Nfts/whatever... but it keeps popping to the top of HN anyway


I've been using the hide button here for weeks because there are so many ai/gpt posts... so, yes.


There is no AI. At best it's diffusive sequence generation, and at worst it's just noise.


The analogue version of this was academic institutions paying gifted, intelligent people to have research assistants who went to the library for them. But we defunded that model of natural intelligence because it wasn't equitable for the ungifted.


As an operation, it's a big mechanical turk that is sped up with huge amounts of server spend. The utility of any output is a lossy derivative of curated knowledge and IP, trained and censored by some of the poorest people on the planet.



This is a sign that you should spend less time on your computer and go out a bit more.


No, just AI fatigue fatigue.


still way below js/py fatigue or internet fatigue; actually I find the recent GPT stuff a bit different in its impact (even though I'm not feeling the fatigue)

bring me npmGPT


Sriracha is great, but we don't need Sriracha Cheerios.


Sick of hype? Yes. Excited about the future of AI? Also yes.


Singularity, my man. You’re tired. But the world isn’t.


I work in the field, so: yes, since about 2015, heh.


There’s some very good stuff going on but no question the hype cycle is currently shifting. Crypto is dead, AI is the new crypto.

With that, all the hype-sters and shady folks rush in and it can quickly become hard to differentiate between what’s good, what’s misplaced enthusiasm, and what’s just a scam.

These scenarios are also a big case study in the Dunning-Kruger effect. I’ve already got folks that haven’t written a line of code in their life trying to “explain” to me why I’m not understanding why some random junk AI thing that’s not really AI is the next big thing. Sometimes you gotta just sit there and be like “thanks for your perspective”.


That’s Capitalism. AI is the new growth frontier, so it’s all you are hearing about. Seems like LLMs and generators are genuine innovations. But don’t lose sight of the fact that these innovations are driven by the Capitalist need to concentrate more surplus value into fewer hands. This is no different than the programmable looms, etc. of the past, except now they will try to automate immaterial/“intellectual” work. It remains to be seen whether these technologies will succeed at that, but the Capitalists are compelled to try, and we will be forced to live with the wreckage.


after 10+ years of stagnation or increments in general tech, this feels really novel

ofc HN over-analysing is killing the fun


You're absolutely right. And if this is only your second ride on the hype cycle, they come around often. Gartner publishes a list of them.

Try to focus on the bright side - now that you've seen behind the curtain, you can more easily avoid the hacks and shysters. They will try to cast the "ML/AI" spell on you and it won't take.


AI is the NFT of 2023.


Like I said some days ago, I really wished that the hype would die or dwindle a bit. I'm working on my own AI side-projects, but the amount of BS and misinformation being put out everyday by new "AI experts" is fatiguing, yes.


Speaking about opportunity cost of folks pursuing AI like a mad crowd... I started a ChatGPT competitor https://text-generator.io let me know what you think .. or if it's too much...


We are not there yet. And crypto is not dead, either.


How is crypto not dead?


Well, I am old enough to remember the cryptowinters of 2015, 2018, and early 2020. Not the first, not the last. BTC/USD is still around 22,000, which looks very much not zero to me.


Total Cryptocurrency Market Cap is over a trillion dollars


the same way stock markets aren't dead.


If crypto is dead, then I'm sure you wouldn't mind gifting me several bitcoin :D


You're too smart to be here ??? ;))


* not yet


Well, eventually either the governments will survive, or crypto will. Governments (as nation states) have existed longer than crypto, but they have had their share of problems as of late.


Are you seriously trying to imply that Bitcoin has a greater chance of survival than the concept of a nation state? Is this a real opinion you're holding in your head?


The concept of a nation state has existed for around 300 years, and it is currently in a crisis. Crypto has existed for 15 years (and is also currently in a crisis, to be fair). I think both governments and crypto will survive somehow, but not in their current forms. There might be distributed states, corporate states, fiat states, confederations, etc.; the “nation” aspect of the nation-state will not survive too long in my opinion (i.e. some people who are alive now will see the end of nation-states). Crypto will be transformed too, perhaps becoming more utilitarian and less reliant on competitive adversarial selection.

This is all speculative, of course, but I have seen the fall of the Soviet system, and I am well aware that forms of government are not eternal.

tl;dr but yes. Crypto of the future will look more or less similar to the crypto of today. Governments of the future will look nothing like today’s nation-states.



