That's a lot of words from someone who doesn't seem to have tried using these tools yet?
The hype is smoke and mirrors, but there are tons of real use cases where things are MUCH easier than before. Things like text-to-speech or image recognition that previously required a team of 10 ML engineers a couple of years to build now take a weekend.
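(To make the "weekend project" claim concrete, here's a minimal sketch of off-the-shelf image recognition with a pre-trained model. It assumes the Hugging Face transformers library; the model name and image path are illustrative placeholders, not anything specific I've shipped.)

```python
# Minimal sketch: image recognition with an off-the-shelf pre-trained model.
# Assumes `pip install transformers torch pillow`; the model name and image
# path are illustrative placeholders.
from transformers import pipeline

classifier = pipeline("image-classification", model="google/vit-base-patch16-224")

# Returns a list of {"label": ..., "score": ...} dicts, best guesses first.
for prediction in classifier("cat_photo.jpg"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```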
It's smoke and mirrors to the "business world" because none of these translate into bottom line numbers though, and I think that's all this guy is able to see
When a person or company comes up with something that is substantive, they won't call it 'AI', because either they will want (I assume) to differentiate it from all the junk that is out there, or they won't need to hype it up by calling it AI.
OpenAI has that name because, from its inception, it was working towards AI. But when they actually released a product, they did not call it “AI”. They called it “GPT”.
It’s not on average being developed as a product with the affordances one expects from a product: save and restore state, seamlessly maintain and modify a working set of artifacts, reset possibly corrupt cached state.
I think everyone who saw ChatGPT and thought something like “there’s a good website for instruct transformers? they’re going to have a zillion users” was acknowledging a milestone: if a tech demo is enough of a capability increase? It’s a product in spite of limitations.
But it’s been a long time, and we still don’t have save/restore, let alone “go curl this”.
We’re maximizing something other than consumer utility.
Part of the core problem of our whole economic setup is the widening gap between "where money comes from" and "what do things do?". The VC hype machine makes this gap so large that it can disrupt attempts to design a specific functional requirement. It's way, way, waaaayyy more important (in the near term, anyway) to get the attention of Mr. Moneybags versus making a self-assembling AI wiring diagram. Especially when these systems are so resource hungry - that's a lot of money up front.
We see a flavor of this in the defense industry, where the users of the Product have a . . a whole ecosystem of fusspots[0] . . between them and the money. Say, for example, you get a gigantic requirements document. It might say "You will make the Product System but you must do the work according to Process X, Process Y, Process Z, and any related Process [X.1.b through X.6.z, Z.1 through Z.921, etc]". Then, when you show you have these Processes/Certificates, you get a pile of money. Thing is, it could be decades - or never - before the Product ever sees anything like a user. Every time someone refreshes the Giant Requirements Document, more money gets paid. The actual transaction taking place is Paperwork for Money - so therefore, the red tape is what's valued. Sometimes - almost by accident - a product comes out from this, but very often it doesn't[1], and the whole machine just keeps cranking on.
[0] I am being really nice here, but yeah, without trying to be nice . . there's a TON of corruption, and despite what the NAFO fanboys say, a LOT of procurement officers have their corner offices already picked out at LockBoNorthRay. It's an absolutely widespread practice - every company I've ever been with has BRAGGED about hiring on Procurement Officer X or Y from the latest bagged contract.
[1] Sometimes a Product comes out completely FUBARed[a] and then the techs and engineers have to figure out a way to make it fight. It would all be a hilarious TV show, but in real life all I can see are the dead airmen and sailors that'll be stuck inside when push comes to shove. Well, "shoving" someone other than five Arab teenagers playing with fireworks in an RV; you can fight those guys with a Garfield body pillow and some body odor.
[1.a] But within that ever-shifting requirements spec!
I think you've misconstrued the argument being made. The historical analogy is kind of interesting -- "smoke and mirrors" was a real technology with genuinely useful applications, but also attracted hucksters promising the impossible. Is it really that implausible that the supposedly world-ending powers of chat bots might have something in common?
I'm just kinda sad at how cynical so many of us have become. I mean, I see tons of comments "But it hallucinates!!!!!" Yeah, no shit. At the same time, you're having a perfectly grammatically correct, natural, sensical conversation with a computer. Just a couple years ago that was purely the realm of Star Trek.
I'm 58. It is hard not to be cynical when you've seen as many hype cycles as I have. Especially when you suspect the person hyping LLMs is the same person that was hyping blockchain and NFTs.
In my personal experience, cynicism seems more like a fixed personality trait people have than one that is rationally borne out of experience.
If you had been holding a representative basket of tech stocks right before the dot-com bust (a classic "hype cycle"), you would be beating the S&P 500 today if you just held on to them, even after most of those companies went bankrupt. Even on crypto, it is too early to call.
The fact that you bridge so easily from crypto to LLM suggests to me you're not really considering these things on the merits and are over-focused on the who and cultural alignment of the people behind various inventions, rather than the creations themselves. /2c
Yeah, I have a few "lottery tickets". I put a smallish amount of money into a wide variety of things that I don't actively manage. It includes crypto, tech stocks, hell, even magic the gathering cards from my childhood.
The plan is when the kids are college aged to see how it nets out and pay as much as I can from that.
I'm not exactly a spring chicken either, and I 100% agree on the insane, unwarranted hype of blockchain and crypto (plenty of examples in my comment history).
But the huge difference with, for example, blockchain, is that despite all the attempts by the hypesters to explain what revolutionary tech it was, I was always left thinking one of "Umm, OK, but what do I actually do with it" or "How is this better than just a normal database?" or "Actually, that's not possible." Blockchain was nearly always a cool tech looking for a use case beyond cryptocurrencies.
But with LLMs, I never felt that way. I was immediately blown away by how cool ChatGPT was, and how much real value I got out of it. Sure, I think there is plenty of overhyping (and I think it's a little funny how companies are trying to add that little star icon everywhere, even when it's just annoying), but even if I ignore what everyone says about it and just use the tech personally, by myself, I'm still left thinking "Damn this is just magic" many times a day.
I am in my 40s myself, but it's difficult for me not to contrast the very little utility we actually saw from NFTs with the utility I get from AI today. It's something people have in production, with millions of users, who are getting actual value out of it.
If a very unreliable person told me it was raining I'd check myself, instead of just assuming it was sunny because they are prone to lying.
Given what we have working today, it's a matter of how transformative it all is, not whether it will be useful at all: More like the dot com boom than blockchains
The only difference between AI hallucination and a regular person misremembering is that sometimes the person will communicate the fact that they're unsure. It's not like the outcome is any different either - if you blindly trust, you've got the same problem.
> The only difference between AI hallucination and a regular person misremembering is that sometimes the person will communicate the fact that they're unsure. It's not like the outcome is any different either - if you blindly trust, you've got the same problem.
There are a few key differences. One being: if we compose a system with the output of a "regular person", and the "regular person" makes a mistake, they can be held accountable.
True - it is amazing. Shows how fast we get used to things.
If someone created an anti-gravity suit, then in a couple of years we'd just be whining about them - how uncomfortable they are, how inconvenient to be banging into people all the time ...
How defensible do you think this distinction actually is? Like if you were fighting a philosophical battle, is this the hill you would choose? Or perhaps retreat to more defensible ground
Yeah, and it's getting quantitatively better, all the time. There is a clear progress curve.
Go back and look at the comical quality of GPT2 outputs, and the things people were getting excited about on Twitter. I wrote GPT off at the time because it just seemed like Markov model shenanigans to me, then I tried GPT3.5 and had my mind blown (but was still skeptical due to all the hallucinations), and GPT4 has finally convinced me that this is inevitable, and it's now just an iterative improvement game.
Whether you agree with the article or not, if indeed AI is going to have a future it must eventually produce something of value to the "business world".
But I think that point of view is important for the 99% of people that will use AI, since most of them will be potential consumers, and the more you can hype and mislead, the better for some looking to take advantage. I mean, I'm seeing it with every new startup or company now, and I'm not even that super knowledgeable, but the numbers do not look correct at all. Everyone lives in the "business world".
At the end of the day, there will always be people making sweeping generalizations counter-positioning themselves against the hype in order to drive engagement.
There are plenty of companies capitalizing on the AI hype cycle which won't manage to build durable businesses, but there are also plenty of use cases where AI is meaningfully accelerating people's workflows.
Situations where it's effort-intensive to create something from scratch but cheap to validate the output of a model and iterate on it until you're happy seem to be the sweet spot right now. Code co-pilots, generative artwork, copywriting, etc. Granted, these are all incremental improvements rather than fundamental evolutions to how we do work thus far, so that aspect seems overblown, but writing it all off as smoke and mirrors is disingenuous.
This is where your comment went off the rails. Is it possible the author simply disagrees with you? Or is the future of AI so clear that the only reason a person could disagree is because they're driving engagement?
> Things like text-to-speech or image recognition that previously required a team of 10 ML engineers a couple of years to build now take a weekend.
---
> This might well be the most fraught moment in generative AI’s young lifespan.
> an anti-generative AI cultural backlash is growing
> to describe the generative AI world’s shadier products and promise
> it’s still remarkably nebulous as to what generative AI systems actually can and cannot do
> and the contexts in which generative AI and the magic lantern
> Report after report indicates that generative AI services are underperforming
> that propels the generative AI industrial complex onward
> And why a large percentage of Fortune 500 companies are reportedly using generative AI.
> the product of generative AI’s smoke and mirrors:
> is a threat to the generative AI enterprise at large.
> generative AI desperately needs that mythos.
> Unlike trends of the very recent past, generative AI has real gravitational pull
> Even critics admit there are some good uses for generative AI
> And so, with generative AI, we’re once again witnessing a core problem with entrusting
---
User: What is "Generative AI"?
OpenZephyrChat: Generative AI refers to the use of artificial intelligence algorithms that generate new content based on input data or a specific prompt provided by users. These systems can create text, images, music, and even software code using machine learning models trained on vast amounts of data. Some popular examples include GPT-3 for generating human-like text and DALL-E for creating original artwork from user prompts. Generative AI has the potential to revolutionize various industries by automating content creation, enhancing creative processes, and enabling more personalized experiences for users. However, it also raises concerns about copyright infringement, accuracy, and the potential for misuse of these powerful tools.
User: Would automated machine transcription or image recognition of existing content count as "Generative AI"?
OpenZephyrChat: No, while automated machine transcription and image recognition can be considered part of artificial intelligence, they do not fall under the category of generative AI. Generative AI refers specifically to systems that generate new content based on input data or a specific prompt provided by users, whereas these other applications are focused on analyzing existing content for various purposes like search engine optimization, accessibility, and content moderation.
User: If somebody responds to an article about the harms of "Generative AI" by purposefully conflating "Generative AI" with systems such as machine transcription and image recognition, how should that be interpreted in regards to their argument being made in good faith or bad faith?
OpenZephyrChat: If someone responds to an article about the harms of "Generative AI" by purposefully conflating it with systems like machine transcription and image recognition, it may indicate a lack of understanding or intentional misrepresentation. In either case, this kind of argument would not be made in good faith as it fails to accurately engage with the specific topic being discussed. Conflation can lead to confusion and misunderstanding about the potential risks and benefits associated with generative AI technology, which is why it's important for people to have a clear understanding of what these systems are capable of before making informed decisions or opinions about them.
I did not actually expect it to be quite so direct. Heh.
No matter how much the chattering class wants to portray this as a "fad" and not an inevitability, those of us who can type and read can see the reality for ourselves.
The tech industry is guilty of fanning the flames of hype to get funding. A lot of people are rightly annoyed over some of the more outlandish claims that were made over the past 1-2 years.
Such as what? I think this is like a massive invention and presages massive changes in the world at least on the decades-timescale. Big claims are warranted.
Come on man I'm not going to go through countless blogposts from a year ago and then argue with you about them if they were overhyping AI or not. But if you recall the discussion a year ago was apocalyptic, both for the human race and software engineering jobs.
Yah-- what we have now is enough to be enormously disruptive as it is absorbed.
The big question is how much better it's going to get. We may be plateauing in capability-- or at least slowing in capability growth-- or we may not be.
But it doesn't need to get better for it to eat a lot of peoples' lunch.
My opinion shifts every other day, it seems. Recently Roon, the "famous" anon X account who works at OpenAI and presumably has access to some unreleased info, said, in response to "are we plateauing soon?", "not even close". I honestly don't know what to think.
OTOH Altman has said that the next release will be about the same advance over GPT-4 as that was over GPT-3. Dario Amodei has said something roughly equivalent: to expect solid gains this year but no reality bender.
I think the real potential gains come when they start extending the architecture, but as long as it's just scaling up and different training/inference regimes, then it seems it'll be more of the same rather than game changer.
Given how connected the whole SF/AI scene appears to be (alcohol + drugs too?), it's hard to imagine a company the size of OpenAI not leaking. If there had been any amazing discoveries there, I think there'd at least be rumors (and not just "they seem to have something called Q* going on").
I feel like for LLMs at least, we've not seen a lot of progress on the high end for a year or so. GPT-4-Turbo is a little better than GPT-4, but mainly only for things that can benefit from huge contexts.
Even if we are plateauing in GPT number (4,5, 12, whatever), there is still a lot of meat left on those bones.
For instance: Chemists using the AI backwards. Take some phrase like 'vanadium increases the Young's modulus versus silicon in 1040 steel' and then work the AI backwards from that phrase. As in, assume the AI outputted that phrase and see what inputs were most likely to generate it. There may be some real discoveries just by working an AI backwards iteratively.
Simple things like that are still open and attainable right now, it's just that AI is still so young that we really haven't explored all of what they do yet.
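(To make the "working the AI backwards" idea slightly more concrete, here is a rough sketch: rank candidate inputs by how likely a causal language model is to produce the target phrase as a continuation. The use of GPT-2 as the model and the example candidate prompts are illustrative assumptions on my part, not a claim about how chemists actually do this.)

```python
# Rough sketch of "working the model backwards": rank candidate inputs by how
# likely the model is to emit a target phrase after them. GPT-2 stands in for
# any causal LM; the candidate prompts are made up for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

TARGET = " vanadium increases the Young's modulus versus silicon in 1040 steel"

def target_log_likelihood(prompt: str) -> float:
    """Sum of log-probs the model assigns to TARGET as a continuation of prompt.
    (Assumes tokenization of prompt and prompt+TARGET aligns, which holds well
    enough for a sketch since TARGET starts with a space.)"""
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + TARGET, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # position i predicts token i+1
    total = 0.0
    for pos in range(prompt_len - 1, full_ids.shape[1] - 1):
        total += log_probs[pos, full_ids[0, pos + 1]].item()
    return total

candidates = [
    "Alloying additions and their effect on stiffness in plain carbon steels:",
    "Heat treatment schedules for 1040 steel:",
]
for c in sorted(candidates, key=target_log_likelihood, reverse=True):
    print(f"{target_log_likelihood(c):10.2f}  {c}")
```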
>No matter how much the chattering class wants to portray this as a "fad" and not an inevitability, those of us who can type and read can see the reality for ourselves.
Right. Regardless of any "superhuman" abilities, we can talk to our computers now, and they can talk back. The decades old holy grail of HCI has been achieved. That fact alone is going to change everything.
Yeah, chatbots have completely changed the way I program, and I'm not a "self-mythologizing executive and founder in Silicon Valley". It's not only very useful but also entertaining; it's quite a lot of fun creating stuff with my friends using Suno or text-to-image models.
I think a big problem looming on the horizon for using llms to help with code is the same "confidently wrong" tone as when they're used as a general search engine. I've seen people blindly follow what they were being told by the llm when, if you'd just read it, it was obviously wrong code (or in one case a json config file). It reminds me of the copy/paste problem from stackoverflow except there's no voting or feedback from others that signify a more correct answer.
LLMs sound so sure of themselves, and people think "well, I'm dealing with the most advanced technology ever, so it must be right...".
On the implementation side of things, it's hard for me to get the non-deterministic aspect of llms right in my head. I put an LLM and RAG system in prod with a team and went through rounds of the usual testing. 99 times it passed but on test 100 it would fail, so you'd adjust the system prompt. Then it'd pass 500 times and fail at 501. Adjust the system prompt, then it would pass 9 times and fail at 10. That system went to production but there's the low level worry in my mind, when is it going to fail to give the correct output? The fact that you can never guarantee the output of an LLM from a given input severely limits where they should be used IMO. I don't think it's wise to have the output of an LLM be the input to another program, there's no functional relationship between domain and range with an LLM.
That problem is usually met with "well, a human would make the same mistake.." but the reason computers exist is to do long, tedious, lists of tasks/instructions very fast that humans get wrong. Simulating a human, and all those imperfections, with digital logic seems contradictory to me.
edit: Also, just want to point out that the "testing" mentioned in my post was all manually done by humans. You can't automate testing the response of an llm unless you use another model to grade the response as correct or not but then you're right back to not being able to trust that the grader will always act consistently.
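(For what it's worth, the closest I've gotten to partially automating this is the repeated-run pattern below: hammer the same input N times and check deterministic, structural properties of the output rather than grading it with another model. This is a minimal sketch; call_rag_system is a hypothetical stand-in for whatever production pipeline you actually have, and the JSON/citations assertion is just an example. It catches format failures, not the subtler "confidently wrong" answers the manual testing had to find.)

```python
# Sketch of a repeated-run check for a non-deterministic LLM pipeline: run the
# same question N times and record every output that violates a deterministic,
# automatable assertion. Catches structural failures only, not wrong answers.
import json

def call_rag_system(question: str) -> str:
    # Hypothetical stand-in for the real LLM + RAG call; returns a canned
    # response so this harness runs on its own. Swap in your client here.
    return '{"answer": "Refunds are processed within 14 days.", "citations": ["policy.pdf"]}'

def repeated_check(question: str, runs: int = 100) -> list[str]:
    failures = []
    for i in range(runs):
        output = call_rag_system(question)
        try:
            payload = json.loads(output)        # we require valid JSON out...
            assert "citations" in payload       # ...with a citations field
        except (json.JSONDecodeError, AssertionError):
            failures.append(f"run {i}: {output[:120]}")
    return failures

failures = repeated_check("What is our refund policy?")
print(f"{len(failures)} structural failures out of 100 runs")
```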
Claude 3 Sonnet is free like ChatGPT-3.5 but much more powerful. For what I need to do, these chatbots + GPT-4-turbo can handle most of it: scripts for managing files, templates, rewriting code, actually solving difficult tasks (when it's too late for my brain to process them), finding the more efficient way to use a library, etc.
Try telling it it's a project manager and get it to write a scope of work/spec document first that you agree with. Then send it to a "software architect" to define the frameworks and architecture, and then to a "developer" to write specific files or modules.
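A rough sketch of what that chain can look like in code, assuming the official openai Python package (v1 client) and an API key in the environment; the model name, the role prompts, and the example idea are all placeholders, not a recipe I'm claiming is optimal:

```python
# Role-chaining sketch: the same model is prompted in sequence as project
# manager, software architect, and developer, each stage reading the previous
# stage's output. Model name, prompts, and the example idea are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(role_instructions: str, task: str, model: str = "gpt-4-turbo") -> str:
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": role_instructions},
            {"role": "user", "content": task},
        ],
    )
    return response.choices[0].message.content

idea = "A CLI tool that deduplicates photos by perceptual hash."

scope = ask("You are a project manager. Write a concise scope-of-work document.", idea)
architecture = ask("You are a software architect. Pick frameworks and outline the architecture.", scope)
module = ask("You are a developer. Implement the first module described below.", architecture)

print(module)
```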
Exactly. When I read stuff like this, I'm always wondering "Have you actually used ChatGPT?" Both of the following can be true, and it's annoying when folks pretend it's either/or:
1. At present, AI can be massively overhyped, especially by salespeople who have an incentive to overhype it.
2. Current generative AI has made leaps-and-bounds improvements in the past few years, and its present capabilities provide an enormous productivity tool for those who use it intelligently.
I mean, no, I don't think GPT 4 is AGI, nor do I really think it's that close. That said, every time I use it I'm amazed at how uncannily good it is, and it is able to save me a ton of time.
The usual pattern is to overestimate short-term change and underestimate long-term change.
We're a year since this stuff really got popular, and so I think a thing that's happening right now is many people noticing the short-term changes were overestimated.
Yes, exactly. There will be a lot of these types of articles in the next few years, and some disappointments along the way, but deep and long-lasting change in the tech sector and beyond is coming.
There is the old quote: "we overestimate the impact of technology in the short term and underestimate the effect in the long run". I watched this with the internet. First it was a toy (before maybe 1991), then it was massively overhyped (1992-2001), then disillusionment kicked in, and in the end it transformed a lot of business, maybe not in the way predicted, but the impact is undeniable. Same with AI: right now we are probably at the 1992 internet stage. There will be a lot of wrong paths, but it's pretty clear this will have a massive impact.
People were right to be skeptical about blockchain and crypto, but AI is on a totally different level of usefulness.
It seems some companies adopting GenAI at this stage are doing so, at least in part, because they want to believe (that they can replace workers), rather than out of any sober analysis of what it can actually do. Maybe there's an element of FOMO too, and companies wanting to use GenAI because they hear everyone talking about it.
No doubt LLMs will continue to improve, but what remains to be seen is if there will be a direct path from LLMs to AGI (which is where the real value gets unlocked), or if we'll just continue to see quantitative improvement in benchmark scores, hallucination reduction, etc, but not much qualitative change in the types of task they are capable of.
I do think that AGI is inevitable, maybe not even that far off (but certainly not next 5 years, probably not next 10 years *), but there's a lot missing from LLMs to get there, and it seems that at least one critical piece, online learning, may require a different approach.
* Note that it's already been 7 years since the transformer paper came out, and all we've really seen since then is a bunch of engineering work in making them more efficient and how best to train them. We haven't yet seen any advances in "cognitive architecture", or even any widespread recognition that there is a need to do so. If all people are doing for the foreseeable future is building pre-trained LLMs, then that is all we will get.
> a bombshell scientific paper recently cast serious doubt on AI developers’ ability to continue to dramatically improve their models’ performance.
This is the problem with this type of (AI isn't real, I promise!) writing. Each and every single time. This bombshell scientific result that he links to? Is Gary Marcus lamenting about a single paper, specific to a single architecture, which primarily shows how quickly algorithmic improvements are bringing down energy costs[0].
We get it. You don't like AI. But you don't have to lie about it, unless of course you are Gary Marcus and this is how you talk your book and earn your speaking fees.
This article, like so many AI hit pieces, fails to make any credible claim against AI. This one spent far too long propping up a tenuous analogy to smoke and mirrors.
Is current-gen AI valuable? Obviously. Look at how many people say as much and willingly pay for the services.
Is current-gen AI worth the hype? No, not really. Current models are impressive but highly limited, as any user can attest (and as any hater will focus on).
What the naysayers always miss is exponential growth. They don’t look towards the future. Nvidia’s stock is high because of hype, yes, but also because the market is anticipating huge advancements in model ability. Current models are limited, but future models are world-changing, and we have every reason to believe models will continue to scale in this way.
The hype is the part where you start talking about the future. We have little idea what will be in GPT-5. OpenAI hasn't announced anything. Sam Altman does hype it up sometimes, in very vague terms.
> future models are world-changing
Possibly, but talking about what you imagine the future will be in the present tense is hype. That's not the future, it's speculation about one scenario. There are others.
There's been a lot of progress making models that aren't quite as good as GPT-4, or are maybe comparable. Why is that? Have we hit a wall? I don't know, but that's a scenario, too.
> Future models are world-changing, and we have every reason to believe models will continue to scale in this way.
So how do you know this exactly?
> What the naysayers always miss is exponential growth. They don’t look towards the future.
This is quite a pat response. There are plenty of critics within the AI/ML field of the unbounded growth hypothesis that you are effectively fronting here. I don't think you really believe that they "don't look towards the future", or that they aren't aware of the concept of exponential growth.
I'd be interested in a cohesive response to their arguments. But in the above post at least, I'm not seeing one.
> What the naysayers always miss is exponential growth.
Exponential growth isn’t _normal_, tho. I think that Moore’s Law has messed us up a bit, honestly; there is a tendency to expect _everything_ to behave that way. And so far, with LLMs, it just doesn’t seem to be happening. When you double the compute, you double the amazingness of the LLM, right? Well, actually, no; based on current results there’s little reason to think that, and if that doesn’t hold your exponential growth is in _serious_ doubt.
> but also because the market is anticipating huge advancements in model ability
Markets anticipate all sorts of things which never, in the end, happen, remember.
> Current models are limited, but future models are world-changing
See, this is why people draw the comparison to blockchain stuff. “Yeah, the current thing’s a bit shit, but just wait for the next one. In the meantime, can I convince you to invest a hundred million in my robot tax advisor?”
> and we have every reason to believe models will continue to scale in this way.
Do we? I mean, to me it looks like it has clearly slowed down, already.
I think the AI itself is working extremely well. The state-of-the-art LLMs are exceptional at many real-world tasks and are deployed in many production code bases successfully.
The more recent AI failures (like the Humane pin) are at least partially a hardware problem. These LLMs require huge amounts of compute to do nontrivial tasks, and a wearable just can't run them locally with the current architectures.
Exactly. Yes, there will be failures along the way, but the AI tools we have now are still pretty amazing and really would have been called magical 5 years ago. And yes, the AI tech already available now is already causing disruption and will continue to disrupt. Anybody saying this is just a 'blip' or 'fad' really isn't trying them out.
I remember in the 90's someone telling me to not bother with Java (back with applets), because this 'Internet Thing' isn't going to pan out. That is how the anti-AI sentiment sounds to me. It's already amazing and continues to grow. There is not pulling it back now.
People see legitimately novel and exciting new things computers can do. The problem is that those things are extrapolated into the future based on insanely optimistic predictions.
Up until now every single innovation has had times of rapid advances and adoption followed by a slowing down and stagnation. Believing that artificial neural networks aren't subject to that trend is, again, extremely optimistic.
The greatest folly, I believe, is people who look at arbitrary problems and imagine AI solving them. This will not happen. Yet I see that thinking again and again.
Artificial neural networks are good at certain things and bad at others.
OpenAI has succeeded massively in making people believe that it can continually improve on basically every problem at an accelerating rate. If that is true, it is obviously the most valuable company in the world. But if artificial neural networks behave like any other innovation, there will be a point at which implementation is impractical and improvements will be limited.
IMO, the struggles with AI in robotics expose more clearly where the current AI bottlenecks are.
Specifically, we don't have a training data corpus for any of our AI robots, enough to help them perform generalized task behaviors. So we have to go out and create them / use non-training techniques, which crucially cannot leverage the existing learning / movements of humans accomplishing things.
ChatAI's training data of "generalized text on the internet" mirrors exactly what it can do right now: know a bit about everything. But computer-readable corpora are lacking for the things that are truly useful: 1) efforts that lead to task accomplishment and 2) better-than-human efforts that lead to better-than-human output.
The key question is this: can AI reach beyond the human dataset that is currently powering it, crucially, at low(ish) cost? Or will AI always be fed training data? If we think the former, then we have to admit our algos aren't quite there yet and this iteration of AI is selling that dream.
But even with our current data-fed AI algos, we have not yet put a lot of effort into the root of the challenge - how to quickly build large sets of high quality training data. I think there can be optimization here (over a period of 10-20 years) on a similar scale of going from million dollar computer mainframes to hundred dollar laptops. If high quality training data for specialized efforts were indeed available at a low cost, then the AI revolution will effectively deliver the dream it promises - just wait for Moore's law.
The first step to solving that is an extremely polished LLM. You can see the proof of concept with SORA and other image-to-text efforts on how quality training data is created with the help of LLMs.
But of course, "LLM generated great training data" doesn't draw headlines, so it may just look like nothing is happening for a long, long time, until you see AI task bots pop up everywhere.
Look, when I can dump a function into GPT4 and tell it to "refactor the following function with an emphasis on maintainability and testing" and it does a pretty damn good job of refactoring it, I have a hard time believing that this is "smoke and mirrors".
> AI’s smoke and mirrors moment - Because of course, AI is not inevitable.
Long term it seems pretty inevitable to me. Maybe the current LLMs are overhyped and there will be an investment slump before the next improved algorithm but long term it's here to stay.
It's probably a bit like the early days of electrical power. Some of the early gadgets were no doubt rubbish, but electrical stuff was there to stay and became a normal part of the world.
I do think they are right about AI corrupting the infrastructure of the internet. Google is pretty much useless if you aren't trying to buy a product, and it's full of SEO'd-to-hell AI-generated content that is just terrible or completely lacking in information.
It has made job seeking a pain in the ass and is leading to tools of manipulation on every side; nothing is real, everything is a grift now, seeking short-term gains while playing scorched earth with everything long-term.
It seems hard for people to take a nuanced approach: that GPT-4-level models have, in the present, the potential to improve many people's lives and corporations' bottom lines, while still being cautiously pessimistic about the next generation of models.
GPT-4, Claude 3 & Co. are simply too useful for certain coding tasks or for reviewing a contract. Obviously, you need to understand you're dealing with a probabilistic being, and for many tasks it isn't the correct tool, but I use ChatGPT ~5 times a day and the $20 is super well spent. Now, there's an ocean between me liking to use ChatGPT and the promises from Silicon Valley and Microsoft.
"Please write the first couple sections for an article based exclusively on the content below. Be direct and concise, but dive into details. Use a personal tone with “you” language. Write in the style of blogger Brian Dean. Use short paragraphs for easy readability. Change the wording so we don’t plagiarize the source material. Make the headlines somewhat catchy and intriguing while still concisely communicating what each section is about.
- AI, also known as artificial intelligence, is overhyped
- AI will never be able to replicate what a human can do
- AI will never replace humans
- AI is a trend that will die out
- AI is not really artificial intelligence, just a trick"