I'm Bearish OpenAI (stovetop.substack.com)
110 points by herbertl 5 months ago | 103 comments



It’s not just OpenAI. I think AI winter is coming much sooner than expected. Throwing more compute at the current approach doesn't seem to be yielding better results. Reasoning will not be solved by a bigger LLM.


On what evidence are you making this statement?

We don't even know how existing LLMs work, not really, yet we're done?

All signs are pointing to there still being no upper limit on results. Not yet.

Just because it's not on the leaderboard yet, doesn't mean we've heard the bell.


The upper limit is on human text, especially text that is very high quality and original. But if LLMs use other sources of learning signal, like AlphaZero, which trained through pure self-play, there is no upper limit. It's just a slow, iterative process, and it has social aspects.

The path ahead to AGI is deceptive; there are no gradients leading towards it. It's based on exploration and discovery. It works in populations, for reasons of diversity - evolution is that way. And evolution is a slow process, not a singularity kind of event.


I find the presumption of an AGI to be an article of religious faith. I realized, at some point this past year, that silicon valley is a religious center emitting theology and doctrine.

I am an apostate. Technology does not have any innate vector for linear development.

The choice to amplify dystopian social trends remains a wrongful choice.

So-called AI is bad for humanity and the planet, in finite terms at this moment in time. Its essence is wrongful.

Turning bad ideas up to 11 does not make them good ideas.


Of course "we" know how LLMs work. Nobody would be able to make one, otherwise.


Just because you can make something doesn't mean you understand why it works the way it does.

There are thousands of people around the world trying to reverse engineer what is going on in the billions or trillions of parameters in an LLM.

It's a field called "Mechanistic Interpretability." The people who do the work jokingly call it "cursed" because it is so difficult and they have made so little progress so far.

Literally nobody can predict before they are released what capabilities new models will have in them.

And then, months after a model is released, people discover new abilities in it, such as decent chess playing.

They are black boxes.


I predict that this is largely an illusion, created by the fact that the datasets and training regimes used are not published.

It's also an artefact of how evals have been done on a pass/fail basis, so that an LLM that gets 90% of a question right is counted as just as much a failure as one that gets 0% of it.

So skills appear to emerge suddenly and surprisingly only because of the flawed way we are forced to study them. Account for the training regime and for partial progress towards a goal, and emergence is far less prevalent. There was a paper on that recently; I'll see if I can find it.
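
A toy illustration of that point (my own sketch, not from any particular paper): take a model whose per-step accuracy improves smoothly with scale, and score it once with partial credit and once with all-or-nothing exact match. The exact-match curve sits near zero and then suddenly "emerges", while the partial-credit curve was improving the whole time. All numbers below are invented purely for illustration.

  # Toy sketch: a smooth per-step improvement looks like sudden "emergence"
  # when a task is scored pass/fail. All numbers are invented for illustration.
  import math

  def per_step_accuracy(params):
      # hypothetical smooth improvement with model scale
      return 1 / (1 + math.exp(-(math.log10(params) - 9)))

  TASK_LENGTH = 20  # a "question" that needs 20 correct steps in a row

  for params in [10**n for n in range(7, 14)]:
      p = per_step_accuracy(params)
      partial_credit = p                  # average per-step accuracy
      exact_match = p ** TASK_LENGTH      # all-or-nothing pass/fail score
      print(f"{params:>16,} params  partial={partial_credit:.2f}  exact={exact_match:.4f}")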


Until <5 years ago, AI was almost entirely a purely academic field, theoretical at that.

Those same academics admit themselves that they're surprised at how well LLMs do considering how simple(?) rudimentary(?) the logic underneath is.

I don't quite understand what you're saying. That these academics were being lazy by not properly investigating/publishing their findings? That doesn't seem right.


They may be black boxes, but that doesn't change that they are operating on statistics. I see no evidence that "AI" so far is anywhere near cracking reasoning. It doesn't matter how magical their inner workings are. They have been trained to spit out plausible text and images (and, to a more limited extent, video).

All of that is imitation, nothing near thought.


It's very commonly understood by those of us who actually produce and consume AI research that knowing how to build an LLM (or a neural net, for that matter) doesn't mean "knowing how" it works. Knowing how it works means mathematically proving and understanding how the steps we put the LLM through during training are able to produce the output we get when testing.

We know how to build it. We don't understand how it produces the output it does based on what we give it.


This still doesn't make sense to me. As far as I'm concerned the gold standard of understanding something is being able to construct a program that replicates it, which is exactly what we can do with LLMs.

We know exactly how LLMs work (relatively simple maths), and to a large extent even why they work (backpropagation updates weights to more closely approximate the desired function). There are open questions relating to LLMs of course - we don't understand what the space of potential LLM-like things looks like and how the features in that space relate to subjective performance (although note that transformers were designed based on a theory that they would perform better, not just randomly generated or inspired by the muse). We also don't know to what extent the output of LLMs can be approximated by simpler symbolic systems, or how to extract such systems from LLMs when they do exist. Those are really interesting questions, but they're not questions about 'how LLMs work'.

I dislike the 'LLMs are magic' framing that seems to be taking over the world. Nobody thinks that Taylor expansion is magical, but LLMs are doing the same sort of thing - approximating a function through a bunch of weights on a bunch of simpler functions. The fact that the function we're approximating (intelligent output) is not known in advance (but can be sampled) and is high-dimensional does not fundamentally change how mysterious the process is.
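
To make that framing concrete, here is a minimal numpy sketch (a toy of my own, obviously nothing like a real LLM): backpropagation nudging the weights on a pile of simple tanh units until they approximate a function we can only sample.

  # Minimal sketch: approximate a function we can only sample by adjusting
  # weights on simpler functions (tanh units) via backpropagation.
  import numpy as np

  rng = np.random.default_rng(0)
  xs = rng.uniform(-1, 1, (256, 1))
  ys = np.sin(np.pi * xs)            # the "unknown" target, known only via samples

  H = 32                             # one hidden layer of tanh units
  W1 = rng.normal(0, 1, (1, H)); b1 = np.zeros(H)
  W2 = rng.normal(0, 0.1, (H, 1)); b2 = np.zeros(1)
  lr = 0.1

  for step in range(5001):
      h = np.tanh(xs @ W1 + b1)      # forward pass
      pred = h @ W2 + b2
      err = pred - ys
      # backpropagation: gradients of the mean squared error w.r.t. each weight
      dpred = 2 * err / len(xs)
      dW2 = h.T @ dpred;  db2 = dpred.sum(0)
      dh = (dpred @ W2.T) * (1 - h ** 2)
      dW1 = xs.T @ dh;    db1 = dh.sum(0)
      W1 -= lr * dW1; b1 -= lr * db1
      W2 -= lr * dW2; b2 -= lr * db2
      if step % 1000 == 0:
          print(f"step {step:5d}  mse {float((err ** 2).mean()):.5f}")

The point is only that there is no mystery in the mechanics: sample the target, compute the error, push the weights downhill, repeat.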


> the gold standard of understanding something is being able to construct a program that replicates it

Cloning animals or even humans did not automatically make us understand how brains work. In fact, these were quite unrelated endeavors.

> I dislike the 'LLMs are magic' framing that seems to be taking over the world

Don't take that out on me. That's not what I'm saying. I'm saying there is a lack of determinism (in the mathematically provable sense) in our current understanding of all AI (LLMs included). There are many attempts to solve this problem. I've sat in on seminars about it myself. So far, we're not there yet.


> Cloning animals or even humans did not automatically make us understand how brains work. In fact, these were quite unrelated endeavors.

I agree. It's not copying that I'm saying is understanding, it's modelling.

> I'm saying there is a lack of determinism (in the mathematically provable sense)

What do you mean by a lack of determinism in this case?


> As far as I'm concerned the gold standard of understanding something is being able to construct a program that replicates it which is exactly what we can do with LLMs.

But we actually can't! We can build a program that can build a program that is the LLM, which is not the same! I'd argue that you're right insofar as training is concerned. We understand training very well. But the actual model, how it operates, what it actually knows, we don't know how to build that, we don't know what weights to put where.


We actually have another example of this as well:

Malbolge is an esoteric programming language designed to be impossible to use. The first program written in it wasn't written by a human, it was written by another program.

But since then, with that working program to learn from, people have figured out how to write programs in malbolge: https://lutter.cc/malbolge/tutorial/cat.html


It's a bit similar to the problem of neuroscience. We understand pretty well how a single neuron works, or even a small number of them, and even a few subsystems like balance or lower-level vision. We understand a bit about muscle control and the endocrine system.

We do not understand language, grammar, music, emotions (only partly), or especially sentience and consciousness. Further, we don't understand how the disparate systems are integrated together.


Great comparison.

The comparison itself is pretty telling: that the brain and AI work so similarly in specific ways.


We were making and using electricity before we knew about the electron


The bearish case would be that companies are now putting tens of billions into compute/data to beat GPT-4 - but aren't managing to. However, I think the initial ~6-month gap between ChatGPT and GPT-4 set unrealistic expectations about the people and time required to turn out a new AI product.


Because the amount of data you can train any model on is limited. After training your AI model on the 1st quadrillion images, trillion videos, where do you find the next quadrillion? This is going to be the limitation, on top of the law of diminishing returns.


You release the agents into the wild as robots, to interact with the environment and continue collecting data of all modalities.


Totally. The next stage is that Tesla is going to copy Unitree G1 robots, let them loose, and have them learn and listen from the real world, and this is where we are going to make very, very interesting discoveries.


Yes, robots will be just walking/crawling/driving "in the wild", on sidewalks and streets. Think about what you are saying.


> After training your AI model on the 1st quadrillion images, trillion videos,

As far as images and video are concerned, we're done. Frame generation is almost perfect, and we don't really need any more training data. Now it's time to build product and enhance how the models work.

LLMs, though? That field appears to have hit a wall for now.

AI for media is going to be a rocket ship. AI for knowledge and text and reasoning will take longer. People will recognize this soon.


> AI for media is going to be a rocket ship.

Is there actually enough data to train an AI on? Photos seem to be a success, like you say, but everything else?

I’ve heard many people make claims about where AI content will be most useful. A persistent theme is AI-made VR worlds, customized video games, and endless AI-generated TikTok videos.

I genuinely question if the training data exists for these use cases. Photos are cheap and easy, but quality annotated 3D models? Short form videos? What about long form video? Are we really there (assuming inference was cheap)?


> Short form videos? What about long form video?

These are being solved as we speak. I'm working on this problem directly and the level of control and consistency achievable is incredible. Video is just a special case of images.

Take a look at the ComfyUI space and the authors of plugins and papers.

> quality annotated 3D models

The research is progressing at a fast pace. We can get good surface topologies, textures, and there are teams working on everything from rigging to animation.


disagree. AI has not even begun to penetrate its potential in corporate America. you could enhance so much middleware stuff with a good unstructured-data-organization AI, slap voice recognition on top of it, and you have massively multiplied the effectiveness of a whole bunch of mid-level people, from operations staff to sales reps, who are very good at their domain but continually stymied by computer syntax and the lack of programmers to make the computer do stuff.


The use case you described can be achieved with a subset of NLP, and has basically already been possible for years.

Named-Entity Recognition (NER) and Text Classification will allow you to figure out what kind of text you're looking at and extract structured data.

LLMs are not good at this because they're not specialized for it, but you can build a specialized NER model to extract custom entities from unstructured data today.
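
As a sketch of how little ceremony the generic version of this takes today (using Hugging Face's off-the-shelf token-classification pipeline; a real deployment would fine-tune a model on the custom entity types you actually care about, and the example sentence is made up):

  # Sketch: pulling structured entities out of unstructured text with an
  # off-the-shelf NER pipeline. A custom model would replace the default here.
  from transformers import pipeline

  ner = pipeline("ner", aggregation_strategy="simple")

  note = "Met with Acme Corp in Chicago on Tuesday; Sarah wants the renewal moved to October."
  for ent in ner(note):
      print(f'{ent["entity_group"]:>5}  {ent["word"]!r}  score={ent["score"]:.2f}')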

That said, I don't really think this is some yet untapped potential of AI so much as an area of ML that just hasn't been applied enough.

ETA: also, in general I think AI is going in the direction of basically just having an LLM route tasks to more specialized ML models (for corporate tasks at least). That's what Google's Vertex AI agents sort of do (and I am guessing the GPT 4 agents as well).


But this is the thing, that requires expertise and infrastructure.

"AGI" LLM's can handle all the nuance, quite simply, they don't need specialist infrastructure or specialist programming, way cheaper, upfront cost. Way easier to scale.

Individual employees can ask it for specific things that make their lives easier, and it'll give it to them or do it. No need to ask your manager, motivate for funding, and hire an engineer or purchase new software/equipment.


If I'm not mistaken LLMs contain NER and text classification pipelines so you can leverage the exact same infrastructure.

I can't see how AGI from that perspective _isn't_ just an LLM routing tasks to more specialized NLP models, to be honest.

Unless you're proposing that a bigger LLM (training data, neural network, etc..) will develop the emergent capability to accomplish this without the need for specialized agents.

In which case, I can't see how that would perform more efficiently than an LLM routing requests appropriately amongst agents, as it necessarily requires processing much more data.

But even if somehow it did perform more efficiently while needing much more data, I don't think the no-agent AGI approach will cover all use cases appropriately.

It might be an easier drop-in solution, but if I need it to behave a specific way in a specific context, I don't see how an AGI is going to do that more consistently and accurately than fine-tuning a model for the specific use case and having an LLM route to it.
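
For what it's worth, the routing pattern being described is simple enough to sketch. Everything here is a hypothetical placeholder: classify_intent stands in for whatever LLM prompt or classifier does the routing, and the specialist functions stand in for fine-tuned models.

  # Sketch of the "general model routes to specialists" pattern discussed above.
  # Every name here is a hypothetical placeholder, not a real API.
  from typing import Callable

  def extract_entities(text: str) -> str:
      return f"[custom NER model output for {text!r}]"

  def classify_ticket(text: str) -> str:
      return f"[text-classification output for {text!r}]"

  def summarize(text: str) -> str:
      return f"[summarization output for {text!r}]"

  SPECIALISTS: dict[str, Callable[[str], str]] = {
      "extract": extract_entities,
      "classify": classify_ticket,
      "summarize": summarize,
  }

  def classify_intent(request: str) -> str:
      # In a real system this is the LLM's only job: map a free-form request
      # onto one of the specialist tools. A crude keyword check stands in here.
      lowered = request.lower()
      if "summar" in lowered:
          return "summarize"
      if "categor" in lowered or "classify" in lowered:
          return "classify"
      return "extract"

  def handle(request: str, payload: str) -> str:
      return SPECIALISTS[classify_intent(request)](payload)

  print(handle("Pull the customer and dates out of this email", "...email text..."))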


I agree that's probably how they will work best. My point was that AGI allows everyone to leverage those tools without the steep learning curve or having to custom-build each component for each specific use case themselves.


AI winter is more about the outlook of the research. Nobody can deny there are huge untapped opportunities in the applications.


Most businesses do not even have their data game on properly. AI is an iteration of Big Data, and BOTH need proper source material, or the value will be zero AT BEST.

https://hbr.org/2013/12/you-may-not-need-big-data-after-all


You are assuming all stakeholders want the mid level people gone. We already have the technology to automate most of this stuff - bullshit jobs exist for a different reason.


that is the exact opposite of what i said. i said it multiplies the effectiveness of people in the middle of a company. so they dont have to be stuck with middleware interfaces from dead software that hasnt been maintained in 10 years.


I partially agree, there are huge delusions of Homo economicus.

There is also this other process at work that is shaped by quarterly earnings reports in public markets. Maybe if we started over with a mirror economy that we rebuild from the ground up, most of these things could be automated.

I think there are many inefficiencies in the system that Homo economicus wouldn't be able to deal with even if Homo economicus actually existed.


I don’t think it’s going to be a “winter”, but there are definitely some bubbles to burst. Especially when LLMs become half-assed products and the general public’s heightened expectations are not met.


Years ago so many ‘machine learning’ startups failed because their predictions were 90% accurate but 99.99% was needed for businesses to pay for them. These old scars seem to be missing in LLM mania - why will businesses pay for them now, when the previous, non-hallucinating ML models weren’t reliable enough?


I think the biggest bubble that needs to burst is probably the whole "AGI" thing.

The definition of it isn't clear but from what I gather it's basically an aggregate of emergent capabilities that work together to produce a singularity.

Maybe with enough resources it's possible, but I highly doubt it'll be economically feasible given how much has gone into it so far and how far away we really are from something like that with current models.


In my opinion you could make an LLM 100x bigger and it would only get better at generating the next token in a sentence. And everyone knows that the best sentences are not constructed by the most intelligent people with the most accurate world model, but by the people who are best at constructing sentences. It's a dead end in terms of real intelligence and reasoning imo.


People who make the best sentences don't necessarily make the world go round. In most scenarios, a barely adequate sentence is enough to keep the world turning.


AGI, in the sense you mention, is an imagined/hoped-for supreme-power that will save/destroy us all (or maybe just the "worthy"/"unworthy" ones).

In an age of such hopelessness about the future, this looks a lot like an emotional crutch wrapped in the veil of rationality - just the thing an anxious materialist needs to make sense of the world.

Like many cults and religions it mistakes the plausible for the possible, and the possible for the probable.

The problem with religious beliefs like these is that they don't just disappear with evidence or sufficient reasoning.

I don't think that particular bubble is bursting anytime soon.


Well, that's one definition but I think most people are thinking of general intelligence like humans have rather than godlike. That is more doable.


If that's the goal then perhaps we already surpassed it and I personally am not impressed.

It's useful but basically every method of quality control requires a human.

I've found that components of general intelligence specialized beyond human capability are much more useful than a model that can mimic a human.

I think an LLM is just trying to do too much at once, all of the individual NLP algorithms most of them are made of are very useful to us, but an LLM is just not specialized enough to be any more useful than a human without specialization.

Which isn't to say they're _useless_, but obviously not as useful as a specialist (in special contexts, denoted by whatever kind of specialist they are)

ETA: as an aside, I'd like to contextualize my presumption that AGI is about AI singularity with the fact that Sam Altman casually stated that he doesn't care if it takes $50 billion to reach AGI.

In the real world, with 50 billion dollars, you can do something much more useful than trying to build a product that's basically contradictory by definition.

An AGI is (presumably) a general intelligence model but it's implicitly touted as being extremely useful for specialized tasks (because, humans can specialize), but once you specialize, I would argue your general intelligence tends to weaken. (For example I wouldn't expect a Harvard PhD to be 100% up to date with modern slang terms, but I'd be shocked if I went to a local bar and met someone who didn't know what rizz means).

This is basically just trying to squeeze two opposite ends of a spectrum together, which sounds kind of like a singularity to me.


Some of the reasons people like Altman get excited are that if AI is as good as humans all round then you can replace the workforce. Also given the way of these things it will get better each year. We'll see.


> people like Altman get excited are that if AI is as good as humans all round then you can replace the workforce.

I get that. I guess my point is this already seems to exist. We could combine AI with machinery to replace almost everything humans can do already, someone just has to build for that solution (e.g. train some models).

AGI just sounds like a sort of automation of that process. And I don't think a bigger LLM will accomplish that task. I think more developers will.

Which I wager would be cheaper and arguably more beneficial to the human race than $50 billion thrown into one pot.


Current AI is pretty patchy in its abilities. Chess is great, chatbot stuff has recently become quite good, but hook it up to a robot and tell it to pop down to Tescos to get some milk and then come back and tidy the house, and it's hopeless!

But yeah developers are needed, a bigger LLM won't fix everything.

The money's a funny one. Global GDP is about $85,000 bn/yr so if someone can spend $50bn on getting AGI and taking it over it's a bargain. But if you spend $50bn and just get a loss making chatbot then less so.


The use case you describe hardly describes what most workers do though. That's a robot Butler, not a desk worker who takes calls and fills out forms based on what the customer on the other end of the line says, or a factory worker (where automation has already been replacing tons of dangerous jobs without AI since the advent of engineering really).

Also, I still think you can probably build something (or rather, many, many somethings) with existing tooling to accomplish exactly that.

> A bigger LLM won't fix everything

I'm not sure if there's a camp that says it probably won't fix anything, but I'm in that camp if it exists.

If you think about how humans actually work, I think a basic, non AGI LLM routing information to different agents/models is closer to how most humans behave (when productivity is their goal).

E.g. a person's behavior is driven almost entirely by the current context they are in most of the time.

It's not that our minds become overexcited by loads of previous information and we magically are able to do other specialized tasks, we decide based on context what specialty in our toolset best fits the scenario.

> The money's a funny one. Global GDP is about $85,000 bn/yr so if someone can spend $50bn on getting AGI and taking it over it's a bargain. But if you spend $50bn and just get a loss making chatbot then less so.

If that's true then the same could be said of just dumping $50 billion into grants/research/funding for education around AI so that developers worldwide have an easier time developing AI enabled technologies and services.

At least with that plan, there is extremely little risk of creating nothing more than a chatbot (and extremely low risk of tech companies monopolizing labor the same way they try and monopolize everything else; I don't have much faith that if a few companies automate all or most labor that they'll redistribute wealth)


It can't come soon enough. Expectations and hype have reached stratospheric levels and are due for a hard correction. Every company is jamming "AI" into places it isn't needed to juice their share price and please the clueless shareholders. Despite LLMs popping up everywhere there's limited evidence to support the claimed productivity boosts. Other than using GitHub Copilot I don't know a single person who seriously uses language models for work, whether it's by prompting directly or pressing a magic "AI" button that performs some RAG.

I've half seriously considered the possibility a large portion of the hype has been manufactured in an attempt to shock stagnating economies back to life, post-COVID, post low interest rates.


How has throwing compute at the problem not yielded better results? Are you denying that generative AI has been improving by leaps and bounds throughout the last few years?


Some people are convinced that because we don't have a next-gen LLM (i.e. GPT-5) 14 months after GPT-4, and because the best 3rd-party models are 'only' GPT-4 level, there must be a plateau or something. Never mind the 33-month gap between GPT-3 and GPT-4, or the 18-month gap between GPT-3 and the first 3rd-party >= GPT-3 level model (Gopher).


Yeah, this is like saying after the Intel Pentium came out that welp, no one has built an exponentially better CPU in the last year, so we’ve peaked as a CPU-building species. Even AMD's latest only brought it up to the same level... CPU winter 1994.


I actually think it's worse than that. HTML5 came out in 2008, so nobody could possibly come up with a new internet business until we progress to HTML6! So without a new release of the language, all development stops!


4o uses less compute and performs better.


The main question is what will be the Venn overlap between AI Winter 2.0 and Nuclear Winter 1.0?


To quote Geoff Hinton

"AI winter !!!!????"


What did he mean by that?


https://en.m.wikipedia.org/wiki/AI_winter

It's been a pattern in AI research since the 70s. Sure, the current boom is unprecedented, but that doesn't mean there won't be a relative bust. AI winter doesn't mean ChatGPT will disappear. It just means research funding may get significantly scaled back if the hundred-billion-dollar investments of today don't generate trillion-dollar returns.


I know. What did Hinton mean by his comment, though?


After spending more time researching than I really should have done I found the quote. https://x.com/pmddomingos/status/1728628326968021100

He was expressing doubt that there is an AI winter near. He's more of the:

>Geoffrey Hinton, dubbed the 'Godfather of AI,' warns technology will be smarter than humans in five years"

school of thought. https://www.dailymail.co.uk/sciencetech/article-12610845/geo...


I’ve sure seen better results from throwing more compute at the problem and we’re going to have a lot more compute in 20 years than we have now.


that's why Microsoft wants to build a supercomputer to do it. You know that Microsoft is backing OpenAI, right?

They don't want to lose to Apple, Google, and Meta.


This is such a bad analysis, I don’t even know where to start. Let me start with the strongest point that this article tries to make:

1. Big Tech will catch up to OpenAI - and yet one year after GPT-4 release, no one has caught up to OpenAI; in fact the only model that beat GPT-4 on lmsys is GPT-4o lol. That's on benchmarks; in terms of users, OpenAI is > 80%, and now with GPT-4o being released for free (there's no other model that comes even close that's available for free) they will cement their lead even more.

2. Some weird argument about compute becoming prohibitively expensive. MSFT has already promised to build a $100 billion data center. If your bar for judging success is AGI, then maybe, but with the compute we already have, there is a useful AI that generates billions in annualized revenue for OpenAI. So I'm not sure why you'd think this won't keep improving for at least the next decade. There are so many jobs that can be automated before we reach AGI.

3. OpenAI lost cracked researchers. OpenAI lost people on the superalignment team, but none of the top research leads who implemented GPT-4 have left. Ilya arguably was just coasting for the past few years (he's not credited with any specific contribution in GPT-4 or 3.5). The most capable people at OpenAI are still present at OpenAI.

so yeah not a great article


Agree that it was not a great article, but I think the main point he made was that the commercial winners will be those that can deliver to customers. OpenAI, with its relationship with Microsoft, will have exposure to that, but even a lesser LLM that is tightly integrated with your Gmail and the Google suite, or is running privately on your iPhone with your iCloud data, or is 'free', will be the winner. Apple, Google, and Meta can spend 10s or 100s of billions of dollars to build AI tools to support their other profit-making products. I'm a happy paying OpenAI customer, but the majority of the world (where the money is) will happily use the AI that comes with their phone, search engine, or word processor.

> The reason for my bearishness is simple: OpenAI, the software company, will ultimately lose to Apple, Google, and Meta. OpenAI, the hardware company, will also ultimately lose to Apple, Google, and Meta. Their only hope is to be the first to AGI.


> Apple, Google, and Meta can spend 10s or 100s of billions of dollars to build AI tools

If money were the only issue, why did Google fail in chat, why did Apple fail with Siri, and why did Meta fail with the Metaverse?


My point was they can spend billions and fail on projects and continue being profitable; they don't need to make commercially successful, better AI products than OpenAI to succeed. Apple could release an iPhone-based LLM agent that is 50% as good as OpenAI's, and Meta can spend billions developing an LLM to use internally and give away.


> My point was they can spend billions and fail on projects

Now you can understand why ChatGPT was the fastest-growing consumer app in history and why they will probably hold that lead for a while.


> and yet one year after GPT-4 release, no one has caught up to OpenAI

You're wrong: Claude 3 Opus beat OpenAI’s GPT-4 in Elo rating just a month ago. They have since lost the top position, but they are at Elo 1246, compared to GPT-4 at 1250 and GPT-4o at 1287. They are all bottlenecked at GPT-4 level, with small variations. That's the issue. Why have they stopped advancing? The gap between GPT-3.5 and GPT-4 is so much wider than that between GPT-4 and GPT-4o

https://medium.com/@simeon.emanuilov/claude-and-gpt-4-top-le...
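
For a sense of how small those gaps are, here is the standard Elo expected-score formula (which arena-style leaderboards build on) applied to the ratings quoted above:

  # Head-to-head expected score implied by the Elo ratings quoted above.
  def expected_score(r_a: float, r_b: float) -> float:
      # standard Elo formula: expected score of A against B
      return 1 / (1 + 10 ** ((r_b - r_a) / 400))

  print(f"GPT-4 (1250)  vs Claude 3 Opus (1246): {expected_score(1250, 1246):.3f}")  # ~0.506
  print(f"GPT-4o (1287) vs GPT-4 (1250):         {expected_score(1287, 1250):.3f}")  # ~0.553
  print(f"GPT-4o (1287) vs Claude 3 Opus (1246): {expected_score(1287, 1246):.3f}")  # ~0.559

In other words, the leader is winning barely more than a coin flip against the models just behind it.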

The explanation is that we have exhausted the sources of high quality human text. All providers train on essentially the same corpus. Architecture variation doesn't matter. Data is king, and AI used up all the human data.

My take on the path forward: it is easier to imitate than to innovate. LLMs up to now are mostly imitative. In order to become innovative they need to learn from things, not just from people. It's a social process, based on the evolution of ideas. It's not unlike DNA and language; intelligence is social. It takes a village to raise a child, and it takes a whole world to train an AGI. Humans also take 20 years of experience-based training before becoming truly innovative at the edge of human knowledge.


>The gap between GPT-3.5 and GPT-4 is so much wider than that between GPT-4 and GPT-4o

Why is this an argument? GPT4o is obviously a much smaller model than GPT4 and is not meant to replace GPT4. It's meant to replace GPT3.5 as the free version. The fact that OpenAI can make a much smaller model and outperform or perform on par with the original GPT4 is a good sign.

OpenAI is still working on the actual GPT4 replacement.


> OpenAI is still working on the actual GPT4 replacement.

You speculate.

...but, what we can evaluate is what we can see, and what we can see is that there’s now only a marginal difference between what openAI actually has available and what other vendors actually have available.

What we haven't seen is a demo of something that is an order of magnitude more capable, aka GPT-5.

Not a demo. Not a post. Not a tweet. Not a hint of something concrete.

Just some vague hand waving that it’s busy baking, take it on faith, it’s gonna be great.

…but let’s be really clear here; what they actually delivered is not great, it is less capable. It is smaller. It’s a consumer model for the masses to talk to.

So, in that context, you have to take in faith that openAI is going to deliver a GPT5, before anyone else.

…but that’s a hunch. That’s a guess.

…and they have some very stiff competition; and it’s very, very much not clear to me that they’re going to be the first ones to drop something that is concretely better than GPT-4, because GPT-4o was a big disappointment.

> The fact that OpenAI can make a much smaller model

Everyone is making small models.

That is not something I am impressed by.


GPT-4o didn’t even need to be better than GPT-4 on any benchmarks. Being conversational and running in real time is such a big difference user-experience-wise that the fact that it beats GPT-4 is just a cherry on top. And it’s almost quasi-confirmed that GPT-5 has been trained, that it’s undergoing red teaming and safety checks now, and that it will be out in the summer, further cementing their one-year lead.


I couldn’t read any more once I got to:

“Chipotle, an $87B company that makes food that you can easily make in your house“

I’ve made carnitas; it takes about six hours.


As a single person, making Chipotle at home (which I've done) is both time consuming and inconvenient.

Chipotle's menu has a _lot_ of fresh ingredients. You can't really prep any of it more than a few days ahead of time, with the exception of the proteins and maybe fajita veggies/pureed salsas which you can freeze.

Everything else has to be fresh, and it would take at least an hour or two to prep + cook: the lettuce, cilantro lime rice, guac.

Or, I can just go to Chipotle and pay $15 (unreasonable for many, but I've prioritized healthier eating) for a meal.


I was "trained" by Blue Apron and I do OK, but making Mexican food has been a pain in the ass and always a failure. The balance of flavors that you have to hit is just so... precise. It seems simple but it's a very hard cuisine, and I envy people for whom this kitchen is a breeze.


I had this exact conversation with a friend of mine who was a lot more optimistic about the tech than I am and he agreed. Big tech is going to eat these guys and we're going to be left with some useful foundation models after we get over the trough of disappointment.


> Big tech is going to eat these guys

Like what Apple did with Siri?


Eliminating creative jobs is not progress, by most people's standards. It's a bean-counter fantasy, but it causes us to question all roles further up the chain as well. Perhaps someone can make C-suite jobs obsolete instead?

Perhaps computers can have sex for us as well, they may even perform better.

At some point one has to confront the human mammal, society, and the planet that we are a lifeform on, that we are rapidly rendering uninhabitable for no reason other than boring ideology.


> Eliminating creative jobs is not progress, by most people's standards

There are always stated preferences and revealed preferences. Before mass-market production, we had artisans to cater to the apparel needs of everyone - clothes, shoes, bags, you name it. Most of those creative jobs are now replaced by industrial manufacturing. And guess what - everyone just buys the mass-market products. I don't see anyone lamenting the loss of the village cobbler or tailor who had only a handful of offerings at insanely high prices.


Just to counter your absolutist point: I do. The imperfections and the story and everything make a thing more alive. In general, in most cases I’d prefer an item made by a single hardcore artisan or a small team to anything factory-made. There are natural limits to this (airplanes), but for most things this definitely stands.


I, for one, don't like American canned comic books or IPs produced by faceless corporations and prefer artisanal Japanese manga.


if it can "make GDP go up" then it is good. it's a political question how to distribute the economic surplus. it always was.

similarly, it's up to us to a large degree where we spend our money. if there are going to be enough people to want non-AI art, then there will be a number of creative jobs.


> if there are going to be enough people to want non-AI art

Well, maybe this will change one day, but right now I don't know a single person who wants "AI art". After the initial hype around magic images created with Midjourney wore off, what's left is what it is: boring, repetitive, a few similar looks, cheap. Just like Chinese plastic goods.


Not so many people want "AI art" (branded as such). But a good chunk of people use generative image models to "commission" works that they are interested in but that would be too expensive to get a human artist to do. There are many niches/subcultures where this is popular, from anime to cyberpunk to fantasy, computer games, role-playing games, etc.

It might be that there is general appeal to be found in images created with AI tools (by a human tool-wielder, the actual artist). But people want to buy art from human artists, to have a back story - for there to be a "meaning", a why. Some initial research: https://www.sciencenorway.no/art-artificial-intelligence/peo...


> I don't know a single person who wants "AI art"

Heck, I don't even know a single person who wants real art, apart from $20-50 wall hangings from Amazon or Etsy (all mass-produced) or cheap souvenirs when they travel (mostly made in China).


> boring, repetitive, a few similar looks, cheap. Just like Chinese plastic goods.

And yet cheap Chinese plastic goods are literally a hundred-billion dollar industry.


Yes, definitely, and there are people who use them, mainly because they can't afford a more durable version or don't care. This is the niche for the current generation of generative AI for art: cheap, boring stuff that some people dislike, some ignore, but it has its uses.


>if it can "make GDP go up" then it is good.

Not always. If you have a car crash and pay medical bills that raises the GDP by the amount of spend but is not a good thing.

I'm not sure how well GDP accounting works with AI.


> If you have a car crash and pay medical bills that raises the GDP by the amount of spend but is not a good thing.

I mean, in an alternate reality where car crashes still happen but the person dies because of the lack of medical care, then it is a bad thing. That we invented medical procedures to deal with car crash victims is a valuable offering to the society and GDP is a crude way to capture such value.


Very crude.

Unusable infrastructure counts towards GDP as well, if I'm not mistaken.


That argument also worked very well when it was "If there are enough people who want smoke-free bars and restaurants, there will be a number of them".


and it will be interesting to see the secondhand effects of AI in art galleries.

where I'm from restaurants had separate dining rooms (and now there's an indoor smoking ban), and most restaurants and bars in general are bad cheap places. if AI crowds out the cheap creative shit (from advertisements to mass produced plastic "home decor" to ad-supported "free art" and so on) I don't see that as a problem.

... furthermore, the cost-benefit balance of smoking - with a high certainty - seems to be negative. we don't know AI's yet.


Imagine working toward a milestone that has no final, universally accepted definition. How does one know when the milestone has been reached? Then imagine that the people tasked with reaching the milestone are also the ones responsible for creating the test for whether it has been reached.


I’m not bearish myself, but I think what companies are doing with the current OpenAI APIs is pretty disappointing. I found out today that Logi+, software to manage your Logitech devices, has a “prompt builder” and can help you write your emails?! Couldn’t they find something more interesting to do with something as powerful as GPT-4?

https://www.logitech.com/en-us/software/logi-ai-prompt-build...

I wouldn’t be surprised if my bank and insurance applications implement exactly the same features. And all of this will be removed in a few months or years, when they realize it’s a terrible, expensive idea.

We will likely have a down to earth moment where companies who poorly invested in AI products will want to cut their losses.


the ai reckoning being discussed is bound to happen imo when the current approaches plateau and leave the gen pop disappointed.

in terms of openai vs real, open initiatives, there are two things in favour of the former:

- the startup playbook of burning vc/investor funds till you make it

- the use of human-enriched and/or private datasets, supported by the previous point

i see a couple bottlenecks to current efforts in reaching the best results:

- hardware/software architecture limitations - the levels of abstraction needed for training the largest models are sacrificing efficiency for achieving the newest targets. in an ideal world, all gpus in the pool would be fully utilized, and each piece of underlying hardware used to its fullest. currently i'd wager we are only halfway there. software stack hegemony doesn't help here either.

- model architecture approach vs goal - it is amazing how much can be achieved by vectorizing everything and predicting the next block in the sequence. however, it can only do so much from what we can see thus far. i don't have any answers as to what could replace it, but i can at least assert that the current approach does not fully leverage the raw digital data that exists out there.

while we ride this wave, i am glad money is being spent in research and tooling here, and i am bullish about people doing more with their devices going forward.


Man, substack is going for all the dark patterns.

2 full page ads to subscribe and 1 half page ad for substack itself.


The bust of AI will be glorious. I just hope it’s the VCs who are left holding the bag, instead of the taxpayer or the retail investor.


Most AI companies are not building AGI. LLMs already have tons of profitable uses.


How much of it is actually going to be profitable and/or useful over the long term? 1%, 0.1%, 0.01%?

There's definitely going to be a bust. AI hype is far beyond anything crypto has ever seen while the vast majority of it provides very little value.


Most companies fail, AI or not.

It's also hard to say, since the numbers are not public. However, some companies do publish their margins.

For example, Interior AI has 99% profit margins: https://twitter.com/levelsio/status/1773443837320380759 -- and it is run by one person.


Why wouldn't you just vouch for the author's submission? https://news.ycombinator.com/item?id=40377634


Some users don’t have vouch capabilities.


What is vouching?


https://news.ycombinator.com/newsfaq.html#dead

  Dead posts aren't displayed by default, but you can see them all by turning on 'showdead' in your profile.

  If you see a [dead] post that shouldn't be dead, you can vouch for it. Click on its timestamp to go to its page, then click 'vouch' at the top. When enough users do this, the post is restored. There's a small karma threshold before vouch links appear.



