Money bubble (tbray.org)
224 points by headalgorithm 10 months ago | 256 comments



> I kind of flipped out, and was motivated to finish this blog piece, when I saw this: “UK government wants to use AI to cut civil service jobs: Yes, you read that right.” The idea — to have citizen input processed and responded to by an LLM — is hideously toxic and broken; and usefully reveals the kind of thinking that makes morally crippled leaders all across our system love this technology.

As someone who recently had to deal with the government in the US, I disagree. It was impossible to reach a human or otherwise get an answer to my likely not-too-unusual question. If they had even a half-decent LLM, I'd probably have had my answer and a list of action items within 30 seconds. Instead I've wasted days in various attempts to get some kind of answer.

I recently needed to fix some issues in something I filed with the government. Email support used to exist but was probably cut due to budgets. Chat support used to exist but was probably cut due to budgets. Phone support has no waiting queue and requires a minute of entering numbers before you hit the disconnect point (due to no available agents). Physical mail seems like an option, but I don't know the format or address. Etc.


> I recently needed to fix some issues in something I filed with the government. Email support used to exist but was probably cut due to budgets. Chat support used to exist but was probably cut due to budgets.

With all that said, what makes you hope any government LLM will escape the whims of budget cuts? I'd rather walk in and wait for hours than share a 500 token/sec chatbot with thousands of other users and never get a resolution.


The only thing I've seen that even gets close to working is physically going to the office in person, but it's hell finding out what or where that is.

And you can't even do that with Social Security anymore.

If it is something you could be legally liable for, I'd at least send a certified letter to whatever address you can find, so that if it becomes a problem later you can at least show you tried.


> The only thing I've seen that even gets close to working is physically going to the office in person, but it's hell finding out what or where that is.

They do try to discourage it sometimes. The local passport office has a sign on the door that says "by appointment only." The first thing you hear upon walking in is "if you don't have an appointment get into line B." If you have an urgent matter they will take care of it without an appointment. I wonder how many people turned around upon seeing that sign on the door. Dark patterns left and right to make it harder to get anywhere.


In the USA, writing a letter to your elected representative (at the appropriate level of government, so Senator or Representative for federal, etc.) can often eventually get satisfaction, because government bureaucrats never ignore a letter from them; ignoring it gets their boss yelled at.


And if the LLM cannot solve the case, maybe it can prepare a well-structured ticket that a hoomin can handle quickly & efficiently, without waiting for users to um and aw and gripe and forget things and have to look things up.


Consider a situation where the human operators only get the problems the LLMs can't solve. If LLMs are good but still limited, then the leftover cases will be those that are unusual, difficult, or downright impossible. Think of things like complicated fraud resolution, or the customer needing to change a piece of data on their profile that the engineers never considered could be changed.

If the switch to LLMs is largely a cost-cutting measure for organizations, I could see that the human operators, though downsized, would continue to receive the same compensation as before. In short, they will be paid the same to do more and harder work. If their performance metrics are based on how quickly they can close a case, these cases will never receive the amount of effort they need to get properly resolved. That is bad for the customer, who can't get a strange but pressing problem solved, and it is bad for the employee, who has to work harder at the same rate as before. The only people who come out ahead are the capital owners.

I've sat with help-line operators for a medium-sized consumer tech company. It seems like 80% of their time is spent troubleshooting very niche issues, with the simple ones sprinkled in for levity. People need wins in order to feel good about their jobs. If it's all difficult problems, at bad pay, then that's just torture.


Exactly!

I doubt solving people's problems is even 10% of the time government customer service spends talking to people. Letting them spend 50% actually solving people's problems would improve everyone's lives.


What about when the LLM inevitably hallucinates a plausible but incorrect answer?


To be fair, a friend of mine ran into a string of similar issues when attempting to get her gender changed on her license and passport. The entire system was rife with incorrect advice from workers and broken documentation, which caused several attempts to be rejected (wasting months of time).

The bigger problem with LLMs as they currently stand is that one can easily bully them into breaking outside their normal operating parameters.


This idea is at odds with the way public services work in societies with safety nets.

It has been, and remains, the case that the main purpose of certain parts of public services is to give people employment. There is rarely any meritocracy at scale once you get the job.

The reason we get poor service cannot be put down entirely to understaffing or lack of budget. While the UK govt has a better online public service experience than many developed countries, this approach, I feel, is missing the forest for the trees.


Everything I hear about public service in both the US and the UK is that it's understaffed to the point where actual people needing those services are greatly impacted. Mainly due to ongoing budget cuts that long predate AI.


>> purpose of certain parts of public services is to give people employment

That does not sound right. This could have been partially true in the old days (here in Poland, during communist rule), but nowadays every public service has a stated purpose that has nothing to do with employment. The purpose can be total b*s of course, but it almost always has nothing to do with just providing jobs.


I disagree with most of this article. My own anecdotal experience is that dozens of non-tech friends, coworkers, etc. tell me that they are using ChatGPT every day. These are people who are telling me how they use it to draft emails, create marketing material, create sales support material, create education material, etc.

During every other hype boom I have been through that ultimately failed, those regular Joe types either hadn't even heard of the tech, simply didn't care or were actively hostile to it. Comparatively, with the new generative AI people are talking about how much they love it, how they use it every day, etc.

Even the Internet had a bubble that popped (back in the Pets.com days, circa 2001 [1]) and this short-term AI bubble will pop too. I expect the same pattern as the early Internet: an early pop followed by a recovery that leads to massive growth.

1. https://en.wikipedia.org/wiki/Dot-com_bubble


> My own anecdotal experience is that dozens of non-tech friends, coworkers, etc. tell me that they are using ChatGPT every day. These are people who are telling me how they use it to draft emails, create marketing material, create sales support material, create education material, etc.

My reaction when I hear this is that those people are being paid entirely too much money if an LLM can do their job. I think this is where the real economic impact will come from: when managers realize it's just LLMs generating emails to be summarized by LLMs and it's just bots spamming each other with busy work all day. At some point companies will realize it's all pointless and start trimming these pointless jobs, leaving a lot of people without any actual skills.


> My reaction when I hear this is that those people are being paid entirely too much money if an LLM can do their job.

That feels like such an unnecessarily cynical view to me. First, the parent comment didn't say they are using LLMs to "do their jobs". Frankly, I feel that if you're a knowledge worker and aren't using LLMs at least part of the time, you're likely being inefficient. E.g. LLMs don't replace my skill as a software developer, but they sure make it faster to learn new libraries and technologies.


Every. Single. Time. I have tried to learn a new library with ChatGPT, it has been wrong and I just went to read the docs myself.

I like that it can figure out my boilerplate, but I wouldn’t trust any info it spits out.


I often experience ChatGPT giving outdated information which is wrong now but wasn't necessarily wrong in the past. One big example is a popular framework for Laravel called Filament [0], which released a V3 a few weeks ago. I don't even bother with it anymore for that because it is useless. However, for example, 90% of my DevOps tasks (mostly Kubernetes) are partially done with ChatGPT, either explaining impact or even writing manifests. It is genuinely awesome for that.

[0]: https://filamentphp.com/


FWIW I've had the exact opposite experience with ChatGPT 4. I did have some issues with the code not always being 100%, but I've found it invaluable when I don't know the keywords for what I should be searching for. E.g. if I want to accomplish something fairly complicated in SQL, I'll explain the problem to ChatGPT, and it has always pointed me in the right direction as to what relatively obscure window function or whatever that I should use.


ChatGPT is good for discovering search terms I didn’t already know, but I’ve learned that I can’t trust any information it gives me.

I’ve had it write incorrect SQL many times, and when it is correct it’s not often the best query, so I only have it write SQL for one-off queries.


> LLMs don't replace my skill as a software developer

Not the greatest example. LLMs fundamentally cannot replace software developers. At the end of the day an LLM is just an interpreter, much like python, but using a different programming language. Any input to an LLM is developing software.

Perhaps the previous comment would be more understandable if phrased as:

"My reaction when I hear this is that those people are being paid entirely too much money if software developers can do their job."


Am I the only one who is aghast at people using these things to write emails, or worse, educational materials? It feels impersonal and shitty to send someone an AI generated email. Furthermore, they lie all the time. Unless you know what you're talking about, there's a large risk to using it as an educational resource.


I think the pets.com analogy is apt. Everyone's running around making "AI-powered" companies (just like everyone was making online shops in the 90s) and clueless investors are throwing money at them because, "AI!"...

Nvidia is lucky though: a lot of big companies will want their GPTs in-house to ensure their secrets won't be used to train someone else's GPT, and that means buying a lot of hardware (it could be in a cloud data center too, but the result is the same for Nvidia).


I'm cautious about picking winners and losers specifically. I remember back in the day all of the search engines: AltaVista, Ask Jeeves, Yahoo, Lycos, Web Crawler, Excite, etc. When Google popped up it was clearly superior to all of them and completely changed the landscape.

In fact, there are few Internet companies from the early 2000s that made it up until now intact. The same may end up true for this crop of AI.

But anyone who was alive and working during that time should ask themselves: would your career have been better or worse if you had started getting familiar with Web Technologies in the early 2000s? What if you saw the impending dot-com crash and decided that the entire Internet was not going to live up to the hype?

I don't have a crystal ball but my gut is telling me that 20+ years from now we'll see any short-term market correction around AI as a blip.


> would your career have been better or worse if you had started getting familiar with Web Technologies in the early 2000s?

Worse. The only time there was any real money to be made with web technologies was prior to 2000. Plain old boring RPC, serving interesting data to lowly web developers, is where the money has been made since.


Ok, to be fair I should have specified Internet Technologies rather than Web Technologies. And of course there are exceptions to all cases.

But I would wager many people are like me, having made a comfortable income on the back of tcp/ip, dns, bgp, http, html, javascript et al. and it is hard for me to think of very many jobs in the TC range of FAANG companies where familiarity with those technologies is not a requirement.

My point wasn't: you should have been a web developer in 2000. It was: in early 2000s the Internet was the most important advancement in tech and you should have been learning all about it. I am arguing that the same holds for the recent surge in AI technology.


One thing to consider is that hardware companies that benefit from a boom don't always do so well in the long term. Lots of investors in Nortel and JDS Uniphase learned that the hard way in the early 2000s.


What in the article did you disagree with?

The author said that AI was and has been obviously useful, but that there’s a lot of dumb money flying around in AI land (trying to build things like AGI).


I think it is great that you know many people that use it daily. Can I ask how many PAY for it monthly? Is it as many people as, say, pay for Netflix? Outside the tech world, of course. Do you see your friends and family paying for AI services directly? Because the amount of money it takes to provide AI services is at least as great as it takes to provide Netflix. Probably way more, actually.

What companies are making a ton of money on AI? Nvidia? Nvidia makes money selling chips to large, massively profitable companies that are in an arms race to capture as much market share as possible. Or they are selling to smaller companies trying to make a name. But none of those companies are making any money from the AI services they sell. All of them are spending massive amounts of capital.

What happens when we reach an equilibrium point where AI services are ‘good enough’? I’ll tell you what will happen, it will become another cost center for the big companies and all further development will cease once they have eliminated the competition.

Want an example? Smart home speakers. Do you think that Alexa and Google home are the best that they could do? Do you ever wonder why Alexa came out and made a splash and then Google frantically made one of their own, but once market share was evenly split between both companies all new development ground to a halt? It is because they were only going to spend enough to keep the other company from dominating then stop spending. Because there is no way to monetize it. Not really. You can charge for the hardware, but that is a pittance to them. Can you charge for the service? I use my google home all the time for some things, but if they told me they were going to start charging me would I continue using it? Probably not. There is a reason Amazon recently RIF’d a bunch of people on the alexa team.

People say that this is not like pets.com because the profits are real, but are they? Or is it some crazy Ponzi-like thing where the amazing profits being had by companies like Nvidia are going to dry up eventually? I’d say it’s more like Cisco in 2000, which was making tons of money selling hardware to all those pets.com companies.

Follow the money. Once you get to the person or company paying for the service, there is none. Not on this scale, at least. I think there will definitely be companies that pay for an AI service, but the total market spend will be somewhat less than the total spend for something like cell phone service or streaming services. You know, the things that everyone you know, from technical to luddite, from rich to poor, all pay for. I don’t see AI reaching that level of ubiquity. Do you pay for email service? I know that the people on this site do, to some extent. Email providers charging for their services are a niche market. AI is destined to be the same.

Just to be clear, I agree with you that this will be like the internet was. It will change the world. But it is without a doubt a bubble, just like the early internet. And for the reasons that you say - everyone can easily see that it is ‘something’ just by using it. The barrier to entry is very low, just like opening a browser and going to a website was. It does not take a genius to see the potential. Which is all the more reason that dumb money is flowing like water into this bubble.


> I kind of flipped out, and was motivated to finish this blog piece, when I saw this: “UK government wants to use AI to cut civil service jobs: Yes, you read that right.” The idea — to have citizen input processed and responded to by an LLM — is hideously toxic and broken; and usefully reveals the kind of thinking that makes morally crippled leaders all across our system love this technology.

And this will not just be in government, it will be everywhere. The scariest part is that as people start to spend less time developing a skill set, and instead deferring to AI answers, you will cross a point where this problem can't be fixed (because nobody has the skills to fix it and the AI is trained on the outputs of previous generations of humans).

For the "olds" who already have a skillset, this will be incredibly lucrative (as those who can afford to pay to fix it will handsomely). But the potential for this to—at best—plateau humanity and at worst, make it regress, is significant.

The dark humor in all this: we thought AI would get us the Terminator, but instead it's going to get us rapid degeneration.

---

Edit: an addendum, the overall point I'm making is well encapsulated in this talk https://www.youtube.com/watch?v=ZSRHeXYDLko


Anyone who works in AI, especially very closely with models, will tell you that it's not capable of really replacing any jobs yet.

All these jobs being "replaced by AI" are simply being eliminated with the consequences of them being eliminated ignored. Customer service jobs aren't being replaced by AI, companies, like Klarna, are just giving up on customer service and using AI to increase their perceived value rather than reducing it.


Adding a link to Klarna's announcement [0] from two days ago and quoting their summary:

- The AI assistant has had 2.3 million conversations, two-thirds of Klarna’s customer service chats

- It is doing the equivalent work of 700 full-time agents

- It is on par with human agents in regard to customer satisfaction score

- It is more accurate in errand resolution, leading to a 25% drop in repeat inquiries

- Customers now resolve their errands in less than 2 mins compared to 11 mins previously

- It’s available in 23 markets, 24/7 and communicates in more than 35 languages

- It’s estimated to drive a $40 million USD in profit improvement to Klarna in 2024

Apparently it affected the stock of a call-center company (Teleperformance) [1].

[0] https://www.klarna.com/international/press/klarna-ai-assista...

[1] https://live.euronext.com/en/product/equities/fr0000051807-x...


How much of it really requires AI though? I bet the majority, if not all of the support that the AI offers could have been done with some of the non-AI chat flow builders if a handful of smart people got together and actually worked the flows out properly to handle the scenarios.


Quite a lot of call centres are, from a user perspective, a flow chart with a human serving as a voice-to-computer interface.

However, I've encountered some pretty weird interactions with customer support over the years, including reportedly "The iMac can't do anything except browse the internet" when the demo unit on display behind them was running Nanosaur (a game); "we only support Microsoft Internet Explorer" when the customer support team didn't have that installed on their computers; «You need a Windows PC and an Android phone» from the German PostIdent people despite it being obvious they could talk to us while we used a Mac and that they knew this because they raised the issue spontaneously; and "yes, we will get your internet connection running by the end of tomorrow" from BT (it took them a month or two, by which time I had already cancelled; apparently someone put the wires in back to front).


My fav was a Dell Tier 1 insisting that he could not process an RMA without me gathering info from the BIOS screen on a laptop THAT WOULD NOT POWER ON.

It only took me explaining it 3 times, then telling him to "get a fucking person on the phone that understands tech", which he did and it was processed in minutes.

There's poor training, then there's just plain stupid.

Clarification: I'd given him the Service Tag, so he knew what device it was. He was insisting that I run the diagnostics and report the results, which is even dumber, in the end.


> a flow chart with a human serving as a voice-to-computer interface

Also known as IPoV (IP over voice).


Isn't that the point though? There is no need to make chat flow builders and pay the "handful of smart people" to figure them out.

If AI costs were reaching very high levels perhaps they would try to make non-AI flows for standard processes. But I think that is unlikely given how cheap AI is vs smart people wages.


I think the problem is that building out chat flows, sitting down with customer care and figuring this stuff out, iterating and improving the "if this then that" logic within a flow builder tool isn't particularly sexy work. "LLMs" and "AI" is though.

The cost of using generative AI to answer questions is orders of magnitude more expensive than using flows. Plucking a company out of thin air - Landbot [1] offers both flow and generative chats. For $100 per month you can have 2,500 flow chats, or 30 "AI" chats. That's nearly a 100x difference in cost. The risks are much higher too - with the flow builders if there's a sudden policy change or whatever then someone can just go into the system and edit it - with AI you'd have to retrain the model somehow. There's also no risk of hallucination with a flow based builder.
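To make the gap concrete, here is a quick back-of-the-envelope check using the plan figures quoted above ($100/month for 2,500 flow chats vs. 30 AI chats). It is just arithmetic on the cited pricing, nothing vendor-specific:

    # Rough per-conversation cost, using the Landbot plan figures cited above.
    monthly_fee = 100.0          # USD per month, same plan price in both cases
    flow_chats_included = 2500   # rule-based "flow" chats included
    ai_chats_included = 30       # generative "AI" chats included

    cost_per_flow_chat = monthly_fee / flow_chats_included    # ~$0.04
    cost_per_ai_chat = monthly_fee / ai_chats_included         # ~$3.33
    ratio = cost_per_ai_chat / cost_per_flow_chat              # ~83x

    print(f"flow: ${cost_per_flow_chat:.2f}  AI: ${cost_per_ai_chat:.2f}  ratio: {ratio:.0f}x")

That works out to roughly 83x per conversation, which is where the "nearly 100x" figure comes from.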

I'm not saying that Gen AI customer service chatbots don't have a use - what I'm trying to say is that in the real world, business would probably be better served day-to-day with just setting up decent flows in rules based bots. That's unsexy though - it doesn't attract tech talent, it doesn't get people promoted and it doesn't get shouted about in the press. It is, however, probably much better for the environment and the company's P&L (but possibly not their valuation if they're trying to ride the hype train).

[1] https://landbot.io/pricing


That's my impression as well: if customer service had been a priority enough, then these smart people would have been gathered already. But there have always been more important things to do. And now with LLMs, you don't have to find these smart people, and those you already have can do something else important.


Amazon and others already claim they use AI to handle customer support and marketing copy (https://www.aboutamazon.com/news/innovation-at-amazon/how-am...). Air Canada got burned when an AI chatbot made a mistake (https://news.ycombinator.com/item?id=39378235). Legal firms have been fined for hallucinated precedent citations (https://news.ycombinator.com/item?id=39491510).

The technology is clearly flawed. Regardless, a lower cost option (AI) is replacing a higher priced option (human labor). Ten years ago these tasks would have been handled by human employees or specialists.

There is certainly a history of this in the corporate world, when expensive union labor in Detroit is replaced by lower cost workers in Mississippi, manufacturing experts are forced to train their replacements in lower-cost countries, or entire engineering and customer service departments are shifted overseas.


We use some AI for the first level of customer support. It can provide standard answers to common questions and help with routine action requests, but it is backed up by human operators for anything more complex, and customers can opt out of the AI whenever they want. From customer surveys, they tend to like using the AI for simple things because it is faster and they get what they need. They like that they can get a human when they need one, too. If you aren't providing a human fallback, you aren't doing it right.


Say customers only fall back to a human for the 10% of more difficult questions. Then one support agent can field as many customer service questions as 10 employees could before. 90% layoffs is Great Depression numbers. Maybe only in one sector, but the sectors it does impact will be hit hard.


Being a consumer myself, I’ve never had a non-difficult question.

I’m not calling to check my balance or get directions. I’m calling because your system did something wrong and needs to be overridden


Same. Never once have I selected the “Check my balance” tele-option, though I’ve had to listen to it innumerable times.


It used to be quite handy back in the '90s :-)


Bingo, and that's the part of AI that I think is overlooked. It might kill some jobs, but it'll never kill an entire sector like the web killed travel agencies.

And the thing is, the greater the risk, the less likely an LLM is going to be trusted to just make a decision.

For example, do you think an insurance agency would want an LLM to decide, on its own, claim approvals? Can you imagine the headache for the insurance agency if the LLM approves the wrong claims or denies the wrong claims?

Or for a doctor's office: imagine AI diagnostics without a human. Can you imagine the headache for a hospital when an AI misses cancer? Or diagnoses a false cancer? It's bad enough when humans get that wrong, but now you have to explain to your legal team, "We just left the practice up to the stats gods!"


>Bingo, and that's the part of AI that I think is overlooked. It might kill some jobs, but it'll never kill an entire sector like the web killed travel agencies.

To be slightly pedantic, the Web killed travel agencies for individuals who were mostly interested in booking flights, cruises, and big city hotels once making those bookings became easy on an individual level.

Companies absolutely still use corporate travel sites, for reasons good and bad.

And there are various types of specialist tour operators, arrangers of private trips, etc. who have on-the-ground knowledge of specific locations--and often won't even book things like air travel for you.

(More broadly, the Web forced travel agents of various types to add value above and beyond what a travel portal or a random travel agent could do because they were gatekeepers to the needed systems.)


> For example, do you think an insurance agency would want an LLM to decide, on its own, claim approvals? Can you imagine the headache for the insurance agency if the LLM approves the wrong claims or denies the wrong claims?

You say this, but I sometimes worry that these issues are hand-waved away by decision-makers with, "oh we'll just have another LLM doing frontline claim support to verify these issues." It's the whole XML/violence thing, where the solution to XML-induced pain often ends up being more XML.


I’m not saying my insurer uses AI, but I recently had two dental claims processed entirely automatically by just providing invoices, and answering some questions online. This is in Denmark. Really nice experience.


That sounds fantastic. In the US I usually have to sit on hold for 20m, then explain what I see vs what I expect, have them check with their supervisor, get word that they agree with me and will pass it back to underwriting, wait a few more days for underwriting to look at it, and then wait to make sure it actually went through.


Can you imagine a lawyer using ChatGPT to draft legal briefs and find relevant case histories? Well...


In both of your examples I could see the model becoming the default, with humans double-checking, if even that. If things go wrong as you describe, humans are pulled into the loop by the humans wronged (unless they give up before). There is a company making money with auto insurance claims: https://tractable.ai/en/products


Oh, how nice it would be to have AI give medical advice.

(From time to time I have some trivial questions, but not enough hundred-dollar bills, to ask an actual doctor.)

I agree with your point though, before starting on chemotherapy I would definitely shell out some money for opinion of a live doctor.


That is not true. Jobs have already been replaced, even before ChatGPT. A customer support team of 10 is replaced with a chatbot and 2 people.

Half of the jobs will be replaced with AI soon. Writers? No need to have a huge team; one senior is enough. Developers? Lawyers? Illustrators? Lay off half and replace them with AI tools!


> A customer support team of 10 is replaced with a chatbot and 2 people.

My experience, having grown up when all customer service reps were people, is exactly what I stated above: this is just giving up on customer service.

Anyone who has ever called automated support knows this. When you reduce 10 people to 2 people and some chatbots now you simply have to wait 5x as long for customer support.

I worked at a startup a few years back that refused to scale customer support so that they would be forced to "automate" the process. The result? Customers got completely screwed over, but those customers weren't investors, so who cares.

I can't recall a single time in my life where automated customer support solved my problem; it just kept me busy so that the 5x wait didn't seem as long, since I was trying to navigate the labyrinth of a customer support decision tree to get my problem solved.


I've had several instances where automated support solved my problem. Amazon re-shipped me a product that was stolen off my porch without me interacting with a human. The fact that you're oblivious to this doesn't mean it isn't happening.


I’d argue that it’s not automated support at that point. It’s just a feature the software now handles.


If the software handles it then it is automated, that is kinda the point of software.


That is true, it is not about the quality. But the replacement is there. And the customer is screwed, but who cares?


I suspect the closer you are to the models the farther you are from the decision making.


It's not an all-or-nothing situation: you can put an AI agent in front of real agents, and if the AI agent is able to answer the customer's question you do not need to escalate to a real human. That could allow 10 humans to now do the job of, say, 20. Did AI replace those 10 jobs? I'd argue yes.


> Did AI replace those 10 jobs? I'd argue yes.

Did it, or is it like the automated elevator? The automated elevator didn't reduce the number of elevator operators, it substantially increased the number of elevator operators.


Yes, exactly like how a sewing machine helped replace sewers. One person can do the job of 2.


If management judges the AI "good enough" to fire people, I think you can describe those people as having been replaced by AI. People may debate whether it's a replacement of equal quality, but people still got replaced by AI.


And the idea of giving up on customer service isn't anything new. Google and Meta have been extremely successful with this approach. AI is just an excuse for others to follow suit.


Google and Meta have great customer support for the advertisers that spend loads of money with them.


The consequence of almost all software is the loss of jobs. I've personally been responsible for many, many jobs disappearing because of the software I've written. AI is just another form of software, and if it is at all useful, and I believe that it is, then it will eliminate jobs. That should not be particularly surprising -- computers have already eliminated entire categories of jobs.

I think the subtle point is that not all humans will be replaced -- it's just that a human and an AI will be able to do the work of a few humans. Same work, fewer people.


Not true at all. Hearsay at best.


Unfortunately not only are there plenty of examples in the real world where this isn’t true and people are already being replaced, there’s a larger issue, and that’s the issue of the pace of improvement.

People think AI technologies improve in a linear fashion. But there is nothing restricting this area of technology from non-linear progress.

Consider the failing prophecies of AGI as a prescient example of what’s happening:

- It was 2015, and I’m reading articles about AGI being here in 2050 at least.

- It’s 2018 and everyone is talking about a few new research papers but secure in their predictions but maybe feeling like it is skewing towards the bottom end of that range (except for a few inspired nerds — my people — who boldly claim 2035).

- It’s 2022 and suddenly an AI is blowing people’s minds and we have the fastest growing technical product in the history of the world. Predictions are now 2030 for an AGI.

- It’s 2024 and people are debating if AGI has already happened and debating about the definition and many people are calling for an “advanced level” AGI in the next 2-3 years.

My point is that predictions of this technology have been terrible. Just like the worst. People have been off on every prediction by orders of magnitude.

So now every major tech company is blowing all their money on AI and realigning their business towards hardware and software solutions, and we’re in the middle of an arms race the size of Jupiter towards AI technology. It’s happening across the world, but definitely in both China and the USA, where the stock market is going crazy and 25% of the entire market’s growth is just Nvidia’s massive growth (and those gains are basically powered by the leading AI training solutions).

So, the idea that somehow the technology isn’t going to replace people is asinine. This is the biggest, fastest tech wave I have ever seen, it’s growing geometrically, it’s funded by insane amounts of money and has most of the western and eastern world’s technology research focused on it, and has been wildly ahead of predictions from its inception.

Let’s get real about this. This is a bomb going off in slow motion and is set to interrupt employment and radically reshape society in a time scale that almost no humans can physically comprehend.

But more importantly the trend is accelerating and like most parabolic markets that don’t have physical limitations holding them back (like input materials for a gold rush) it could accelerate to literally crazy levels. There’s no restriction but breakthroughs here.

I’m aware of the current reality of the software systems and I know what I’m saying is futurist, but from a trend perspective we are way, way ahead of where we thought we were going to be and the trends point to railgun speed acceleration from here.


Predictions have been wrong in both directions at the same time; myself, I confidently predicted in 2009 that normal people would be able to buy cars which didn't have steering wheels in… 2019.

While I'm really impressed with ChatGPT, and am one of the people who regards it as meeting my prior definition of AGI[0], I can still see its current flaws, and do wonder if this is a similar case, where the first 90% needed a major breakthrough but once that was invented anyone could do it and many wanted in on the economic opportunity… but the second 90% turned out to be just as hard, and so was the third, … and you need at least six nines[1] to really replace humans in these roles.

[0] All three letters mean different things to different people. To me: it's artificial, it's much too general to count as a narrow AI, and I count it as intelligent because the things it can do were the things I grew up thinking were signs of intelligence, like speak Latin, do algebra, and answer trivia questions, and also things I added later like 'write code' and 'pass medical and law exams'; even though it gets the answers wrong sometimes, I don't think my standard was ever "must be perfect" because nobody ever scored 100% on exams at school either.

At its best (and it's weird that it even has a best and a worst), the free version of ChatGPT has given me better code than one specific real human I've had to work with, more if you also add in the students. (And at its worst, it gives me stuff that doesn't compile and wouldn't do what I asked even if I fixed the compiler errors).

[1] Assuming a driver is making 1 decision per second that has a serious wrong answer, it would take eight nines to have just under one serious accident in a lifetime of 1-hour-each-way commutes, 5 days a week, 50 weeks a year for 40 years. I suspect the actual time between opportunities for serious mistakes is less than that, but probably at least once per minute even on an empty road.
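Working footnote [1] through explicitly (a minimal sketch, using only the assumptions stated above: 1 decision per second, 1-hour-each-way commutes, 5 days a week, 50 weeks a year, 40 years):

    # Decisions with a "serious wrong answer" over the driving lifetime in [1].
    decisions_per_second = 1
    seconds_per_day = 2 * 60 * 60              # two 1-hour commutes per day
    commute_days = 5 * 50 * 40                 # days/week * weeks/year * years

    lifetime_decisions = decisions_per_second * seconds_per_day * commute_days
    print(lifetime_decisions)                  # 72,000,000

    # At "eight nines" the per-decision failure rate is 1e-8:
    expected_serious_accidents = lifetime_decisions * 1e-8
    print(expected_serious_accidents)          # 0.72, i.e. just under one per lifetime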

For an LLM, I don't know exactly how good they'll really need to be, all I can guess at is that they're not going to have more than one opportunity to seriously mess up per token.


Excellent reply and I apologize for not having time to give you an equally thorough response at this time as I’m slammed. It’s such a worthy reply I apologize.

But the question I would ask you is whether you see any barrier to simply scaling up the tokens on the existing technology? I don’t. I see us at the beginning of a ladder where the only input is capacity, just like when Intel was young.

While breakthroughs can change the path, the truth is we have a fairly predictable step-by-step path to much more capable systems without one, just by scaling the size of the hardware.

Consider this argument:

1) At its core, the crux of our learning is that predicting the next word may be what human thinking is generally about, and how our brain works, because that’s largely the innovation here.

2) Becoming better at doing that is entirely predictable and we can scale profoundly from current levels with hardware that we are putting into production right now and that we have already invented.

3) Therefore the path to next generation capabilities is relatively (and that’s important I admit) linear.

So prior to a breakthrough, we have a simple path forward to what would be at least fairly advanced capabilities of language and media prediction and manipulation.

Now your argument about progress is right. Predicting the material progress of technology breakthroughs tends to be unpredictable and inherently dangerous, but we are in an accelerating trend, and most of the time (big statement here, right?) the appearance of an acceleration trend tends to extend into continuance of the trend over a certain timeframe. At least that’s been the case with the "waves" of technology breakthrough since the Industrial Revolution. I mean to invoke Smihula's theory of waves in this argument, since I know you’ll understand that.

Those last two arguments are statistically supported and quite logical. How smart does it need to be and how statistically probable are excellent points.

As an aside…

For me, one of the areas I am focused on and thinking about a lot is self-organizing AI agents.

Having worked a lot with large scale networks, agents and task specialized networked AI systems get me excited. My brain considers it a blue ocean opportunity.

The parallels between human civilization density, and what we are learning about population density in demographics driving essential human progress, make me believe there will be parallels in AI. The more agents there are, the more they will self-organize into network effects, and the outcome of this, like human civilization, will be high-quality and rapid progress. I am not a believer in one gigantic AI, but in networks of networks, self-organized in a way where they self-optimize around goals and outcomes, and we are really just at the beginning of exploring this direction of the technology.

The biggest limiting factor to AI technology at this point is human input and the need for human oversight.

While that oversight is definitely necessary, once AI becomes self organizing and self-creating, progress should be profound.

Anyone who doesn’t think that’s going to happen needs to understand the nature of intelligence and realize it’s just a matter of time. You can’t go down this path in a meaningful way and repress only certain aspects of digital intelligence in the long term.


Thanks :)

> But the question I would ask you is whether you see any barrier to simply scaling up the tokens on the existing technology? I don’t. I see us at the beginning of a ladder where the only input is capacity, just like when Intel was young.

My expectation is that we need algorithmic improvements rather than scaling; AI can read approximately all of the internet, but current models need to actually do so just to reach the level of intern or fresh graduate. While this makes them superhuman in the breadth of skills they can perform, they need something else to improve the maximum quality in any given skill — in some cases, we can already train them on synthetic data or self-play, e.g. chess, though I don't know how broad an impact that would have.

But I do expect such algorithmic improvements, so in effect we are in agreement, if not in the details of how.

When it comes to hardware improvements, I'm not sure how that particular landscape will change over the next decade. Transistors are close enough to atomic scale they can't go on much longer, and Dennard scaling has long since stopped, but that doesn't mean nobody cares or that nobody is working on the energy efficiency. And if — just if, it isn't necessarily true — if human level intelligence needs a network with as many free parameters as there are synapses in a human brain, we're around 3-4 orders of magnitude away from that at present.
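To spell out that "3-4 orders of magnitude" estimate (a rough sketch; the parameter and synapse counts below are commonly cited ballpark figures, not precise values):

    import math

    # Ballpark figures only: large public LLMs on the order of 1e11 free
    # parameters; the human brain is usually estimated at 1e14-1e15 synapses.
    model_parameters = 1e11
    brain_synapses_low, brain_synapses_high = 1e14, 1e15

    gap_low = math.log10(brain_synapses_low / model_parameters)     # 3.0
    gap_high = math.log10(brain_synapses_high / model_parameters)   # 4.0
    print(f"{gap_low:.0f} to {gap_high:.0f} orders of magnitude")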

> I mean to invoke Smihula's theory of waves in this argument, since I know you’ll understand that.

Thanks, I was unfamiliar with it: https://en.wikipedia.org/wiki/Smihula_waves

> Having worked a lot with large scale networks, agents and task specialized networked AI systems get me excited. My brain considers it a blue ocean opportunity.

I think you're correct. The current zeitgeist is do-everything models, and the only blue ocean opportunities are found when you zig when everyone else is zagging, and vice-versa.

Although, be quick; if my cursory reading of Smihula's theory of waves was correct, you don't have much time before the current market reaches saturation, and moves on to the next thing.


Smihula's waves, when extrapolated, would indicate a 15-year cycle for the smartphone era (assumed to have "innovation saturated" 2007-2022), and an 8-year cycle for AI (starting 2023). That's a staggering amount of change in too short a period to adjust to.
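One way to make that extrapolation concrete (purely illustrative; the constant shrink ratio below is an assumption chosen only to reproduce the 15-year and 8-year figures, not part of Smihula's own dating):

    # Illustrative only: assume each wave lasts a fixed fraction of the last one.
    shrink_ratio = 8 / 15              # chosen to match the figures above
    wave_years = 15.0                  # smartphone era, 2007-2022
    start_year = 2023.0                # assumed start of the AI wave

    for label in ["AI wave", "next wave", "wave after that"]:
        wave_years *= shrink_ratio
        print(f"{label}: ~{wave_years:.1f} years, {start_year:.0f}-{start_year + wave_years:.0f}")
        start_year += wave_years

If anything like that holds, each adjustment window keeps getting shorter, which is the point about there being too little time to adjust.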


The core reason is that nobody understands how these models work, and hence what they are/will be capable of. It's a little unnerving to think about, since the approach is pretty much "Lets build the framework, plant the seeds, and then see what comes out the other side".

If AGI is achieved, it will almost certainly be a surprise rather than be intentional.


There is something very wrong with the current dynamic of the world. Work is deeply devalued while capital has been reaping all the benefits of the increased productivity: if you doubt me, ask anyone who has made any kind of investment if they are making more money from the investment or from their regular job.

This is creating a ridiculous wealth disparity and disincentivizing a whole generation from getting good with a skillset. I have already heard from a lot of young people that working is not worth it; it's hard to disagree with them when even a basic thing like a piece of land or a house looks out of reach for a regular person.

But as you put it unsustainable things are not sustainable, society will regress until the equilibrium is found again. But things didn't need to be like this.


You're right, they didn't.

> “We can say without exaggeration that the present national ambition of the United States is unemployment. People live for quitting time, for weekends, for vacations, and for retirement; moreover, this ambition seems to be classless, as true in the executive suites as on the assembly lines. One works not because the work is necessary, valuable, useful to a desirable end, or because one loves to do it, but only to be able to quit - a condition that a saner time would regard as infernal, a condemnation.”

> - Wendell Berry


This is so spot on. We (assuming you live in the US) live in a country where the purpose is to work for as little as possible, "grind," then throw money into an index fund and have it support us the rest of our lives. Own property, while having the bottom 50% of society continually support us in the gig and service economy.

Look, I'm no Bernie Sanders, but you have to be honest about the morality of it, and the feasibility of it. I don't see the current system lasting.


As opposed to when in the past?

This line of thought is just so lacking in gratitude for the time and place you were actually born.

You basically won the lottery in the grand scheme of things but still complain.

Would it have been better to be born in 1950? Certainly not if you happen to be born in China.

How about in 1910 so you hit your 20s right as the depression hits. Or 1920 so you grow up in the depression then go fight in WW2.

How about Cambodia in 1970?

Yea life would be better if I was 6'4, strikingly handsome with a dead rich uncle that left me all his money too.


Your commentary is pretty childish.

I pointed out a structural issue of the modern world and you came up with a pointless counterpoint about being born in an unfavourable condition in the past.

I guess I'm being trolled.


Technological progress to that extent touches ethical issues. What are we supposed to do if our existence doesn't depend on us learning / collaborating?


On the other hand, in my country we have the stereotypical lazy government employee who is overpaid and doesn't work hard.

The government could then save money and provide better service for menial tasks such as "what permit do I need to do such and such"


This is a fiction. You assume the government will provide better service. Instead (from a historical evidence and performance perspective), cuts will be made, service will degrade, and those who lied to champion the changes will not be held accountable.

Who will be held accountable when these promises evaporate? My problem is not with innovation, it is with falsehoods and lack of accountability for those falsehoods.

Edit: As PheonixPharts says in another comment:

> All these jobs being "replaced by AI" are simply being eliminated with the consequences of them being eliminated ignored. Customer service jobs aren't being replaced by AI, companies, like Klarna, are just giving up on customer service and using AI to increase their perceived value rather than reducing it.

https://news.ycombinator.com/item?id=39554367

You don't need an LLM to do that. You can ignore your customers just fine without it. Cut out the performance art, go straight to zero without it. It is still mostly a powerful search engine backed by the equivalent of a knowledge base, not a replacement for human support. If the human is not providing what is needed, that is a system failure, not a human failure. This tech augments the human, it does not replace the human.


Accountability requires power. We have stripped politicians of most of their power, leaving them fiddling with margins and fighting over petty things. "Politics is Hollywood for ugly people" they say, with a chuckle, but it's far more true than we accept.

Unless and until the executive and the legislature can seriously threaten the financiers, journalists, academics, lobbyists, judges, bureaucrats, etc. again, like FDR could, you can't expect accountability. The politicians are just the faces that implement other people's decisions, and those other people don't even have elections to lose.


> The government could then save money and provide better service for menial tasks such as "what permit do I need to do such and such"

Yes, but apply Murphy's Law. They could also automate something like appeals to eminent domain claims and make it impossible for you to fight. Imagine being told the family farm that's been passed down over 5 generations is now going to be claimed by the government and turned into a parking lot for a new "justice center."

When you go to appeal, the hyper-efficient but devoid-of-empathy AI bot just says "sorry, Dave, I'm afraid I can't do that."


It certainly looks like that from far away, but get close enough to a real permit application, and you'll see that there are so many edge cases, and every situation is so unique, that the exception always defines the rule.

I'm willing to wager that a govt. employee processing, let's say, renovation permits sees no more than five applications in their entire service history that are textbook, as in they can be approved without any required corrections.

Extend that to any other application, and you'll quickly see the value of an experienced government employee helping you navigate the bureaucracy. If you haven't yet, then you are likely very young and/or your parents have taken care of everything for you to date. And before you blame the civil servant for the byzantine rulebooks, I want to rush to remind you that those civil servants only interpret the laws/rules. They have as much hand in creating them as you or I.


Isn't that the dream though? To be overpaid and not have to work hard?


Not if you're the employer, which, if you're the taxpayer, you are.


No you're not. They're employed by the state which is managed by the politicians you elect.

I presume you mean to imply that civil servants are paid by your taxes, but that's not true either if we're talking about a sovereign state - a moment's consideration would show that spending must precede taxation, which is true as a matter of accounting.

Really you'd do better to note that state employees allow you to get money enabling you to pay your taxes (but that's also not a very helpful way to look at it).


If you're trying to apply Modern Monetary Theory, it only works at the federal level (in the USA); state and local governments are directly affected by taxation, which must precede spending (or they have to issue warrants or bonds).

This comes back to bite California every time there is a major tax revenue crunch for whatever reason.


As I said, a sovereign state. (Though I did edit that clarification so you might have missed it).


How does that work with states like IL being massively in debt due to overspending and pensions?


If "in debt" means "they can keep issuing bonds and people are buying them" then they're fine.

If it means "they cannot make payments on bonds and cannot issue new ones because nobody wants them" then the Feds will have to step in, or they'll have to liquidate state assets (including privatizing various governmental functions, selling land and leasing it back, etc), or raise taxes to balance the budget. They literally cannot print money.

This cycle has already destroyed a few cities (usually the city gets swallowed by the county).

There's a step where they issue "warrants" like CA did a few times: https://taxfoundation.org/blog/california-issuing-state-warr...


It means people pay sky-high tax rates, deal with lower-quality government services, and the government sells off infrastructure like their parking meters. Also, land does not appreciate, and the state is handicapped in attracting new, highly profitable businesses.

https://reason.com/2019/03/01/companies-should-avoid-states-...


As taxpayers, we most certainly do collectively pay for government services.


Pray show me the accounting for how that works...

Edit: here's my offer for the UK case: https://www.ucl.ac.uk/bartlett/public-purpose/publications/2...

If you want to argue something that directly contradicts that analysis, I await with anticipation.


Correction: Page 4, right here: https://fiscal.treasury.gov/files/reports-statements/mts/mts...

The US Treasury website provides a wealth of information about how the Government collects and spends its revenue.


How does that make the point you think it does? It's just the accounts of the US.

Here's a more complete analysis of the US: https://www.jstor.org/stable/43905834?seq=21


There's an economic-theory view of the world and there's an accounting view, and they don't necessarily overlap. To most people, including (I would assume) most folks here, the accounting view provides the lens through which they view these matters.


Uhm, I'm still confused. Which position are you arguing for? Perhaps you can clarify what you're saying.

Edit having read your edit: that paper doesn't have the rest of the circuit so can't explain the source of the money.


Income statements are not circular. They’re scoped to a given entity. Not sure why you would think otherwise.

If you haven't taken an accounting course already, I would highly recommend it!


The accounts you showed are not in dispute, nor that governments have receipts and outlays[1]. Your point about scoping is an important one. The discussion is about the whole circuit - from money creation to destruction. For that, you need an accounting model that covers all the entities in the circuit, including notably the money source, the bank (there's only one in the government circuit, conveniently owned by the government).

When you have the whole model, you'll see that money creation (triggered by spending) has to precede destruction (from taxation for the most part), otherwise nothing can flow.

A critical point to take from this is to note that all the entities in the circuit need to be in balance (for the double entries to be correct). That means that for the private sector to have net savings and the foreign sector to be in surplus (i.e. a historic current account deficit), the government sector (which includes the central bank) must be in deficit.
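A minimal sketch of the sectoral-balances bookkeeping being described, with made-up numbers (the identity that the three sectors' balances sum to zero is standard national accounting; the specific values are illustrative only):

    # Sectoral balances: private + government + foreign = 0, by accounting identity.
    private_net_saving = 3.0    # private sector saves 3 net
    foreign_surplus = 2.0       # foreign sector surplus, i.e. a current account deficit of 2

    # For the double entries to balance, the government sector absorbs the rest:
    government_balance = -(private_net_saving + foreign_surplus)
    print(government_balance)   # -5.0, i.e. a government deficit of 5

    assert private_net_saving + government_balance + foreign_surplus == 0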

This is super important! It means that deficits are not just not a problem, but that they are a necessary part of the system. It also shows that governments are not financially constrained (there's no limit on how big the numbers can be), so taxation is not needed for getting the money to pay for things. [2]

Steve Keen has a blog post that goes through much of the accounts behind this:

https://profstevekeen.substack.com/p/money-from-nothing

[1] receipts and outlays is interesting terminology vs income and expenditure - perhaps noting that the government is not a business?

[2] however, governments very much _are_ resource constrained, so they must use their infinite buying power very wisely, otherwise inflation ensues. This shows one of the main purposes of taxation, which is to induce the private sector to provide real resources that the government can purchase.


The central bank (Federal Reserve) is intentionally decoupled from our government. (Yes, it is subject to certain political pressures but the institutional independence remains.) I’d recommend reading biographies of Alexander Hamilton to learn more.

I’m not going down the MMT rabbit hole with you; it is a waste of time.


I'm not sure I understand your objection. Why are we taxed at all, if not to pay for government expenditures?


The obvious question! Because taxing is used to free up the resources necessary for the state to purchase them. Once you force everyone to pay taxes, they have to get hold of the state currency, which should (in a well managed system) only happen through state purchase of resources, notably people's time.

That is, the ability of a state to provision itself is driven through its currency which in turn is driven through taxation.


Because something something MMT something something.


I know you're goofing on it, and tbh I give more credence to MMT than most folks do, but most folks I know who think about MMT still acknowledge that if people believe they're being taxed to pay for things, they still kinda are. MMT is a guiding principle for policy, not something to tell individuals how their relationship with the state actually works (because most folks don't care).


You make a good point, which is that even through an MMT lens, tax and spend should be the norm, since the resource provisioning and supply should balance to avoid inflationary pressure.

However, I suggest they probably should care given how much policy is guided by an incorrect understanding of the monetary system. The whole concern about deficits and sovereign "debt" is the obvious one.

In addition, a good understanding of why taxation is necessary helps to understand which taxes might be useful and which are not.

Finally (for now!), policy options open up when you understand this stuff properly that make no sense at all through the state-as-a-household view.

Politicians need to be held to account and an ignorant population is not able to do that.


I was goofing on it to a certain extent, but the central problem with MMT -- as I see it -- is that the "deficit spending doesn't cause inflation" part of it will work, until it doesn't. MMT has developed (in popularity, at least) during a highly unusual period of economic stability and peace in the developed parts of the world. Extrapolating that experience into perpetuity is beyond foolish.

Put another way, go ask some African nation that got "bailed out" by the IMF if their "deficits didn't matter."


I'm the government. I buy your labour for $100 after tax, and you leave that money in your bank account - which you have to do, because any further spending or income would be taxed, and that would reduce the deficit on that spending to below $100.

Where's the mechanism for inflation?

It's government spending that doesn't cause a deficit that is potentially inflationary, not the deficit spending part.


> a moments consideration would show that spending must preceed taxation, which is true as a matter of accounting.

Can you elaborate on that? I can think of numerous examples from history for how governments bootstrap themselves. If your point is as simple as who pays the tax collector, the tax collector can be paid on commission, debt, or with plunder.


This is a comprehensive analysis: https://www.ucl.ac.uk/bartlett/public-purpose/publications/2...

The general case is more or less the same as the UK, with only the details varying. As noted elsewhere, this doesn't apply to non-sovereign states.


I think you meant 'employer'.


I did, thank you!


Sure thing.

You're still wrong on the merits, though. To pick one example, the folks who work at my local MVA ("DMV" in most states) office do not work for me, though they're paid out of my taxes and those of everyone else who earns in Maryland. They don't report to me.

Nor should they, because if they did, they would also report to that freak who drives a car plastered with QAnon garbage around Perry Hall.

My city councilman and my General Assembly representatives work for me, but the people employed to deliver services managed by the state and city government do not.


The point is that we as collective taxpayers don't want to overpay for government services. And that point is a good one.


Is it? Define 'overpay.'


Have you never bought a snack at an airport or a theme park?


Oh, boy. Just don't bring gold-fringed flags into it, okay?


For who? Certainly not for the person paying.


Yes, but that’s for us tech workers, not the common prole. /s


No, the one thing that's predictable is: no matter what they do, service will only get worse.


In the transition from pre-internet to modern times, I have found my experience with both the California Department of Motor Vehicles and the tax board has improved significantly. So much less traveling to crappy offices and waiting in hours-long lines. I still wouldn’t call it great, but it has objectively improved.


The huge increase in service has been that they let you sit on the other side of the wall, basically. Instead of telling a DMV employee what they need to hear to fill out the little computer form, now they allow you to fill out the same form, on the web, and unpaid, too!


OK, but when I had to sit at the DMV for two hours, I wasn't getting paid for that time, either. So if the options are an unpaid 5 minutes on the computer, or an unpaid 2 hours at their office, one is clearly better for me.


It actually IS true that DMV and property tax payments are pretty good in CA now.

However, I'm sure that's unacceptable to some folks and they'll soon crapify it.


Christ this is dark! Imagine a world where everything and everyone will be judged, ranked, evaluated, hired, fired, maybe even chosen as a partner or friend, by these AIs.

Unfathomably grim, even if the alternative is rigid, low-skill bureaucrats.

I find LLMs extremely fascinating, but if this is the end game I really hope AI-free zones will emerge.

You can already see Gen Z being obsessed with face ranking filters, "looksmaxing" from data points and using filters day to day. It's dark.


I think gig economy workers and algorithmic punishments already have an element of that. It’s entirely possible to do a good job of it, but it appears that no one chooses to do so - I presume the same would be true for LLM-based processes.


> The scariest part is that as people start to spend less time developing a skill set, and instead deferring to AI answers, you will cross a point where this problem can't be fixed (because nobody has the skills to fix it and the AI is trained on the outputs of previous generations of humans).

The loss of knowledge/skills was a key bit of Foundation which itself was a retelling of the fall of the Roman Empire.

As key skills become rarer, the price goes up... until you can't hire for those skills at any price.


But... if the price goes up, isn't this going to attract people to that domain of skills? For skills to vanish, there have to be more factors at play, like restrictions on the dissemination of knowledge, no?


Doesn't that assume that people will forever be better learners than AI?

If the "olds" learnt a skillset at some point, the data they used to learn the skill is presumably available to the AI too. Why can't the AI learn it too?

(Not talking about physical labour which clearly has way less potential to be replaced than knowledge work)


> Doesn't that assume that people will forever be better learners than AI?

Better creators, not learners. AI can't create, it can only remix what's already been produced by humans. Human progress is created, not learned. The olds who are conditioned to try new things when an existing solution doesn't work still have the capacity to create something new (wholly new, not just remixed new).


AlphaGo and AlphaStar both started out based on human training and then played against versions of themselves to go on and create new strategies in their games. Modern LLMs can't, as far as I know, learn/experiment in exactly the same way, but that may not always be true.


Yeah, but they had a limited set of rules to work within (they were just hyper-efficient at calculating the possible outcomes relative to those rules). Humans, in theory, are bound only by the rules they believe in, as there technically are no rules (it's all make-believe). For example, what was the "rule" that told people to make a wheel? There wasn't one. The human had to think about it/conceive it, which AI can't (and I'd argue never will be able to) without rules.


Reinforcement learning is a completely different strategy compared to how most LLMs work.


I'm really worried about the implications of this technology for programming, art, and literacy in general. The skills required to be a good programmer or artist are not what's being sampled by these models, merely the output. There's a real danger here of losing these skills if they can no longer be developed professionally, and that even means no more human training data for newer and better models to be trained on. We'd be stuck with whatever trash current models are putting out.


The competency pipeline issue predates the threat of AI, but I do agree that AI will make it much worse, as it’s hard to see a payoff on investing in skills if AI is going to replace you anyway. I feel like a wise old sage, knowing things that new generations will never learn and that may even be lost to humanity.


As a programmer, I hope programming remains "sacred", but I can see the flip side: It's a specific way of making a machine do what you want, repeatedly. I can imagine the AI-fuelled world just entering the machine's capabilities/parameters, and asking an LLM in human language to generate code for it. Sure it might not do it in the most elegant or efficient way, but many programmers also use npm.


In this AI-fuelled world where everything is run by 2nd rate AI-generated code, talented programmers will be able to hack the whole world.

Forget working for anyone, just make the machines do what you want.


I kind of flipped out, and was motivated to finish this blog piece, when I saw this: “UK government wants to use computers to cut civil service jobs: Yes, you read that right.” The idea — to have citizen data put into and processed by a computer — is hideously toxic and broken; and usefully reveals the kind of thinking that makes morally crippled leaders all across our system love this technology.


Dealing with an AI will be much better than dealing with a civil servant, in most cases. There is a certain kind of person who becomes a civil servant and many of them will not only have a hostile attitude, but also make it their life's mission to try to do as much damage as possible in the pettiest ways possible to the people they "serve". Especially if you're of a sex, ethnicity or age group that they hate. Sometimes the same as theirs.

Letting citizens deal with their bureaucratic errands with an online form or portal instead of with a civil servant in their office has been an enormous benefit, in the places that offer this. An AI will fuck things up, being an AI, but it will not necessarily treat people with a hostile attitude and lie to clients to spite them. Unless it's programmed by civil servants, that is.


> Especially if you're of a sex, ethnicity or age group that they hate. Sometimes the same as theirs.

Well, Gemini just proved that if you're white, you're the ethnicity the AI is told to hate.

That's the thing, because it can be programmed by humans means at some point, it will be abused to do something nefarious. And because it only knows what humans tell it about reality, it will always "think" within the context it's been given (never in the abstract).


The future was predicted in The Machine Stops and Mockingbird


I'd argue more of a mix of Brave New World, 1984, and Atlas Shrugged


that'd be WALL-E


I was thinking Idiocracy. Time to use AI to generate enough blogspam about the benefits of watering plants with gatorade to "poison the well" of datasets used for future training.


It’s your well? That sounds like it’ll just result in Gatorade replacing water for crops even sooner.


I like to think we'd snap out of it eventually (like in WALL-E)


> And this will not just be in government, it will be everywhere. The scariest part is that as people start to spend less time developing a skill set, and instead deferring to AI answers, you will cross a point where this problem can't be fixed (because nobody has the skills to fix it and the AI is trained on the outputs of previous generations of humans).

I think that would require AI development to approximately halt at close to the current level for over a lifetime.

Conditional on development halting, I'd agree with you. By analogy, there's this single, very useful, very powerful set of "hidden methods that can be used to win all games, get rich, find love, determine the limits of thought itself!" — mathematics[0]. Do people like learning it? They do not. Calculator much easier. What a calculator does is none of that; calculators are merely arithmetic, but most people can't tell the difference between mathematics and arithmetic.

I think LLMs have the same effect on anything that can be expressed in words, and all the various image generator models have this effect on graphical arts. One must be extremely motivated to get past the "but the computer is better than me" hump.

However, I don't expect AI development to even approximately halt at anything close to the current level. There's a lot of room for self-play in domains like maths and computing where the proofs can be verified, and probably a lot of room for anything that can be RLHF'd, too. And that's also assuming we don't get any brain uploads; regardless of the question of "is such an upload of a human capable of consciousness", which absolutely matters, uploads may still be relevant to the economics of AI, depending on the cost of running one, which in turn depends on details I can't even begin to guess at this point (last I heard, https://openworm.org was not actually measuring synaptic weights directly, but rather neural activity? I may be out of date, not my field).

Whatever happens, however good it does or doesn't get, I do expect something to go very weird before I reach the current state pension age — close enough that, if that something is "the machines break" or "society breaks", then there will still be plenty who remember the before times.

[0] https://www.smbc-comics.com/comic/secrets-2


> I think that would require AI development to approximately halt at close to the current level for over a lifetime.

What I'm getting at isn't AI development halting, but human knowledge/creativity halting [1]. Because the AI is and can only be trained on human knowledge, its knowledge of reality has an upper bound (whereas, theoretically, humans can know anything or make new discoveries that don't exist in our current knowledge set).

If you don't tell the AI that strawberries are a thing/reality, it will never conceive of a strawberry on its own. And arguably, it's not fair to call these things "AI" until they can do so.

[1] Charlie Munger said "show me the incentives and I'll show you the outcome." Well, in this case, the incentives to use AI > spending the time to learn and develop skills. The outcome here is clear: humans will stop producing new knowledge and by extension, the AI will stop receiving new knowledge to learn.


> Because the AI is and can only be trained on human knowledge, it's knowledge of reality has an upper bound (whereas, theoretically, humans can know anything or make new discoveries that don't exist in our current knowledge set).

AIs do not have such a limitation; they can be trained on anything, including to design and then perform their own experiments in laboratories, e.g. this one: https://engineering.cmu.edu/news-events/news/2023/12/20-ai-c...

> If you don't tell the AI that strawberries are a thing/reality, it will never conceive of a strawberry on its own. And arguably, it's not fair to call these things "AI" until they can do so.

I have far too many projects on my plate right now, but that is kinda one of them, has been since… wow, only December, these last few months have felt like years: https://benwheatley.github.io/blog/2023/12/12.html


lol. This is how you end up with that scene in idiocracy where you find out you get your college degrees at Costco.


I normally think the entire market is overpriced, which I mostly still do, but I'm not convinced that tech stocks are overpriced, at least the ones he referred to. Nvidia has a forward P/E of 32, which is in line with Microsoft (35), Apple (28), Intel (31). Compare to KO (Coca-Cola) which is 22, which gives the 33% tech premium that one of his linked articles notes. But KO is going to grow at the rate of global growth (3 - 5%); I think it is not unreasonable that all of those companies are going to grow at least 33% more than KO. So I don't think there is a bubble in major tech stocks. It is possible that Nvidia will not retain its current sales over the long term, but given that Nvidia cannot satisfy current demand, that seems unlikely in the next couple of years. Training the future models isn't likely to require less compute, and I think there is a reasonable case to be made that people will need domain-specific training for domain-specific ChatGPTs (or whatever the future is). Which means more training.

Yes, I think it is way overhyped, but on the other hand, actual people are using ChatGPTs. I've used it for simple code to get started with an unfamiliar (but popular) library. I talked with a non-technical friend recently who was using it for relationship advice (with predictably unhelpful responses, it can't tell you the issues you are unaware of, but still).

If there's an AI bubble, it's in the early stages. In my mind the overpriced aspect of the market is the complete denial that stock prices should be lower at 5% interest rates than at 1%, all things being equal. At least they should be if value = profit / costOfCapital, as it's supposed to be.
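
To illustrate that last formula with made-up numbers (a toy perpetuity-style valuation, nobody's actual model):

  # Toy example: the same profit stream, discounted at different costs of capital.
  profit = 10.0  # annual profit, arbitrary units

  for cost_of_capital in (0.01, 0.05):
      value = profit / cost_of_capital
      print(f"cost of capital {cost_of_capital:.0%}: value {value:,.0f}")

  # cost of capital 1%: value 1,000
  # cost of capital 5%: value 200

Real valuations adjust for growth and risk, but the direction is the point: higher rates should mean lower prices, not higher.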


Nvidia's margins are currently 75%. I don't see any way they can possibly keep that up for more than a few years. As soon as a legitimate competitor shows up those margins will be cut in half if not more. Coca-Cola on the other hand has a very predictable business, with profits extremely unlikely to decline anytime soon.


As far as I can tell, 75% is gross margins, and 75% is fairly standard for gross margins. Net profit margins are only 48%, although that is still rather high. I would expect more around 30%, which it looks like their historical net margin has been, so that's a good point. I think it will be hard for competitors in the near-term, unless AMD has something close to ready to go. My understanding is that it takes about 4 years to design a CPU through to manufacturing; GPUs might be quicker since I think they are simpler. I think the current margins are due to higher prices because of demand vs supply. I guess the question is whether demand decreases, and if so, does it decrease first or does a viable competitor arrive first? And even if a viable competitor arrives, does it affect demand? (AMD CPUs have always been cheaper than Intel's, but until recently it did not affect Intel's margins because Intel always had the better process and therefore the performance edge.)


After this last run up in stocks despite high interest rates I've completely given up on trying to value based on any metrics.


I gave up on that because the valuation formulae all divide by a small number with large uncertainty. Instead I use dividend yield, P/E, P/B, and P/S, all relative to historical values for that company.


Younger people who may not know: Tim Bray was one of the creators of XML. Also a nice guy on Twitter (at least he was a few years ago); he's probably active here too.


> Younger people who may not know: Tim Bray was one of the creators of XML.

Also editor of the JSON RFCs:

* https://en.wikipedia.org/wiki/Tim_Bray#JSON

* https://datatracker.ietf.org/person/tbray@textuality.com


Tim Bray is still a nice guy on Mastodon https://mastodon.me.uk/@timbray@cosocial.ca

[Edited - added Tim's name to help future searchers]


His blog posts are one of the main things that keep me coming back here. Insightful, and he often is able to put into words things about the industry that I can only feel in the periphery of my heart.


While I agreed with a lot in this post, I'm also pretty wary of the underlying, unstated idea that average investors can avoid bubbles popping while also somehow taking advantage when things go up - that is, he doesn't really say it in so many words, but he's essentially talking about timing the market.

I started my tech career around the turn of the century, and made the mistake of putting a ton of money (at least for me, at the time) into Global Crossing. My thought was that while there were all these "fluffy" doomed dot coms at the time, Global Crossing had billions in real, physical infrastructure they built. Obviously I didn't quite understand debt at the time, never mind the actual fraud that Global Crossing committed (I remember thinking "Wow, stocks really can go to zero and never come back.")

Sure, you could argue I made every newbie investor mistake in the book, but the worse consequence for me was that it "spooked" me early in my investing career, such that I became very reluctant to invest in things when I felt they were overvalued. E.g. I was one of those people who thought there was a giant tech bubble when Facebook bought Instagram for a billion dollars - in 2012...

So sure, you may think I'm an idiot, but I can quite guarantee I was far from alone. It was only at the point where I really, truly believed "I'm definitely not smarter than anyone else in the market" (and hardly anyone is) that I just put my money in index funds, did regular rebalancing, and otherwise forgot about it.

We may be in an AI bubble, we may not, but I've seen way too many "vastly overvalued" companies continue to be "vastly overvalued" for over a decade (and then only briefly coming down before shooting back up again) to think that Tim Bray has any special insight here.


My knee-jerk reaction is to point out that "even if you bought the S&P at the height of the dotcom bubble, the annualized return over the past 24 years was ~5.5% (not including dividends)"

And while I think this line of thinking is still more correct than not, I wonder how much I (and a lot of other folks in the US) are discounting the possibility of a prolonged period without growth.

Despite shocks like in 2000 and 2008, the S&P has spent very little time "underwater" over the past 50 years. But that's not the case if you look at something like the Nikkei, which took until this year to get back to its 1990 peak.


Whether or not they're discounting the possibility of a prolonged period with little growth, the fundamental issue is that this is essentially unknowable, at least to your average investor. The 2 issues I see with this line of thinking (i.e. comparing it to the Nikkei):

1. You wouldn't want to dump 100% of your money in an S&P 500 index fund. There is a reason to diversify.

2. The point of dollar cost averaging is essentially to reduce the risk of dumping all of your money in (or out) at a bad time. Taking your Nikkei example, I'd be curious to see if you looked at, say, investing the same amount of money on the first of the month over a 2 or 3 year period. The amount of time you'd be under water over the past 4 decades would be much less than just looking at any single instance in time.


I don't have monthly data, but as an approximation here's a rough test where you make a one time investment of $1000, either all at once, or equally spaced over 2 or 5 years. This is simulated starting at each year from 1985 to 2005, and we count the number of years underwater starting 5 years after the first year (after $1000 has been put in for all 3 "strategies") up until 2023.

  S&P:
         once  dca_2yr  dca_5yr
  count  25.0     25.0     25.0
  mean    0.7      0.8      0.7
  std     1.8      1.5      1.1
  min     0.0      0.0      0.0
  max     8.0      6.0      3.0

  Nikkei:
         once  dca_2yr  dca_5yr
  count  25.0     25.0     25.0
  mean   11.4     11.5     11.8
  std     9.1      9.5      9.4
  min     0.0      0.0      0.0
  max    29.0     29.0     28.0

So investing at once, the max number of years underwater for the S&P was 8, versus 3 when "dca"ing over 5 years. The average number of years underwater (averaged over when you would've invested) is quite low, while for the Nikkei all metrics look much worse.
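
In case anyone wants to poke at this themselves, here's a rough sketch of the test logic (simplified: it assumes a plain list of year-end index levels, ignores dividends, and the actual run above may differ in details):

  # Rough sketch of the "years underwater" test described above.
  # prices: list of year-end index levels, one entry per year.
  def years_underwater(prices, start, spread_years=1):
      # Invest $1000 equally over `spread_years` years beginning at `start`,
      # then count the later years in which the position is worth under $1000.
      shares = 0.0
      per_year = 1000.0 / spread_years
      for i in range(spread_years):
          shares += per_year / prices[start + i]
      underwater = 0
      for year in range(start + spread_years, len(prices)):
          if shares * prices[year] < 1000.0:
              underwater += 1
      return underwater

Looping `start` over 1985-2005 and averaging, for spread_years of 1, 2 and 5, gives numbers in the same shape as the table above, though the exact run presumably differed in details.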


Thanks! Curious, how/where did you get the data for this analysis? Also, don't know if you included dividend reinvestment - that has a huge overall impact.


https://www.macrotrends.net/2593/nikkei-225-index-historical...

I did not - adding an optimistic 2% dividend didn't change much for the S&P, but slightly reduced the underwater counts for the Nikkei (max 23, average ~7.5)


@2: Research on the topic seems to disagree with you. You're not taking less risk, you're just trading one risk for another.

Vanguard Research actually wrote a paper about this called 'Dollar-cost averaging just means taking risk later' [0].

Or if you would like more recent research, the paper 'Dollar Cost Averaging vs. Lump Sum Investing' by Ben Felix [1] is worth a read imho.

0: https://www.passiveinvestingaustralia.com/wp-content/uploads...

1: https://www.pwlcapital.com/wp-content/uploads/2020/07/Dollar...

Edit: Formatting


I had the exact same experience with Bitcoin back around 2014. Only now just getting back into investing as I realise it's an important thing to do for wealth preservation.

I suppose it's not uncommon for people to have this kind of experience, so I'm just glad I had it young.


Bitcoin != investing.

I say this as someone who lost money in the 12DailyPro / eGold fiasco of 2005/2006. (read: I am very, very dumb.)


> Bitcoin != investing.

Investing in Bitcoin = investing, though.


It was only at the point where I really, truly believed "I'm definitely not smarter than anyone else in the market" (and hardly anyone is) that I just put my money in index funds, did regular rebalancing, and otherwise forgot about it.

This is the way.


I had this experience with 3D printing. In 2010 Google/Meta were transitioning to Mobile. It was not at all clear at the time that Mobile would turn them into multi-trillion dollar behemoths. To have bet the farm on FANG at the time would have been extreme.

Dropping money into an index fund is generally the right way of going about things. My suspicion is that those who talk about clearing N million on NVidia big bets either had enough money that they could go long with $1MM on a single stock, or just got very lucky in their first trading experiences.

If someone gets 1/100 luck three times in a row - then they can easily get to 1-10MM portfolios from a ~10k starting point. You'd expect around 1 in one million traders to do this.


I thought that $10 billion valuation of Facebook in 2010 was crazy and sold off my Bitcoins around 2011-2012 because of similar sentiments. And since around the same time I've been reading stories on the front page of HN about how tech is in a bubble.


Notice that all these bubble posts never make the following claim:

“The price of X index fund/asset/real estate will be lower at future date Y than today”.


Can you explain what you mean by rebalancing in the context of index funds?


You choose a portfolio distribution, say 75% NASDAQ / 25% S&P.

After 1 year, you look at your portfolio, and because of market movements, your portfolio is now 81% NASDAQ and 19% S&P.

So you sell some NASDAQ and buy some S&P to rebalance to 75% / 25%.

Rebalancing can be any mix of securities or assets (or both). You decide how you want your wealth distributed, and you rebalance to stay within those levels.
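
A tiny sketch of that calculation, with hypothetical numbers matching the example above:

  # Compute the trades needed to rebalance back to target weights.
  def rebalance(holdings, targets):
      # holdings: {name: dollars}, targets: {name: weight};
      # returns dollars to buy (+) or sell (-) per holding.
      total = sum(holdings.values())
      return {name: targets[name] * total - dollars for name, dollars in holdings.items()}

  holdings = {"NASDAQ": 81_000, "S&P": 19_000}  # drifted to 81% / 19%
  targets = {"NASDAQ": 0.75, "S&P": 0.25}
  print(rebalance(holdings, targets))
  # {'NASDAQ': -6000.0, 'S&P': 6000.0}  -> sell $6k of NASDAQ, buy $6k of S&P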


Suppose you have an asset allocation strategy that is 25% US stocks, 25% international stocks, 25% real estate (REITs), 25% commodities (I'm not suggesting you do this, but this was the allocation in Roger Gibson's famous multi-asset allocation strategy paper - google it). To implement this you would want to:

1. Choose 4 different funds to represent each of those classes (e.g. an S&P 500 index fund, an MSCI EAFE fund, etc.). You want to be sure to reinvest dividends.

2. On a regular schedule (e.g. once a quarter) you rebalance your portfolio - if anything has gone above 25%, you sell it so that you can buy anything that has fallen below 25%.

Many investment platforms let you essentially do this automatically these days.


Not OP. They probably invest in more than one fund, and want to keep the ratio balanced. Eg: 60% MSCI World, 20% Emerging Markets, 10% Tbonds and 10% Gold.

Every few months they would check the ratio and “rebalance”




Over unencrypted HTTP, the server responds well: http://www.tbray.org/ongoing/When/202x/2024/02/25/Money-AI-B...


PSA: publish your static blog content on a CDN, not on a $5 VPS


If it's static it doesn't (shouldn't) matter. I had over a million hits on my $3/mo VPS and it handled the load perfectly fine.


How many times do we have to see a blog post hugged to death by HN before you will change your mind? We literally saw it happen with OP just now.


My point is that it's typically a design choice when choosing something like Wordpress or a dynamic site, versus a static HTML file that's under 100kb. Though in this case the site's resources are under 400KB, so I can't really be sure.


Nobody's saying a static site has to be under 100kb or whatever. You can have a static site that has 100Mb+ of assets. You can put whatever design you want on a static site, it doesn't have to be minimalist and without images.


Absent GenAI, do the BigTechs of the world have enough growth going on elsewhere to appease Wall St volcano gods sufficiently?

They all seem to be hyping GenAI a ridiculous amount, prompting this question. And it makes sense for them to ride the hype train and get something out of it. But it also makes me wonder if that only makes the eventual drop even larger.


As ever the truth is in the middle.

- LLMs provide functionality that was very difficult to implement until 2 years ago.
- We can decode natural language statements relatively well and relatively easily.
- We have an approximate common sense knowledge base.
- We can encode statements into human readable text flexibly (this was never so much of a problem as the first two - but it's still useful).

But, these are not magic boxes that can tell our fortunes.

So we can do good things if we engineer things well, and there is a lot of synergy with other AI tech that's been evolving in the last ten years. STT and object recognition are both very useful, and end-to-end differentiable reasoners are coming in now as well. ML was already becoming important in 2019; 2023 created an inflection and some hysteria, but there's substantial value to be had.


I mostly agree but I have a few quibbles with some arguments.

For instance, Bray considers the adage "The CIO is the last to know". From the 90s until now, developers have always snuck new technology in without management approval. You put Apache on a forgotten Linux box in the corner because it's easy and fun, and a few months later the whole company relies on it. Developers are not rushing to deploy skunkworks generative AI solutions, so, the argument goes, probably generative AI isn't that good.

There's a couple of problems with this.

1. Not everything that is good can be deployed skunkworks-style.

It might be that AI is only really good with incredibly high up-front costs and extremely specialized developers. Like launching a satellite. You can't do it yourself with stuff you have lying around, and even if you had the money to do it you probably don't have the expertise to do it safely. But it's still extremely valuable!

2. Sometimes we are using this technology to hack up solutions to personal problems!

I had a video which I wanted my hearing-impaired father to watch. I could have paid a human or AI-powered service to generate subtitles, but I found that I could do it myself with OpenAI's Whisper, on an old laptop, and then munging text files together in the usual way. I was a little shocked that this worked offline. I could have done it on a plane. This absolutely fits into a hacker workflow.
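
For anyone curious, the whole pipeline is only a few lines with the open-source whisper package (a rough sketch; the filenames are made up and the SRT formatting is simplified):

  # Generate an .srt subtitle file from a video, locally, with openai-whisper.
  import whisper

  def timestamp(seconds):
      # SRT timestamps look like 00:01:23,456
      ms = int(seconds * 1000)
      h, ms = divmod(ms, 3_600_000)
      m, ms = divmod(ms, 60_000)
      s, ms = divmod(ms, 1_000)
      return f"{h:02d}:{m:02d}:{s:02d},{ms:03d}"

  model = whisper.load_model("small")     # downloads once, then runs offline
  result = model.transcribe("talk.mp4")   # "talk.mp4" is a placeholder filename

  with open("talk.srt", "w") as f:
      for i, seg in enumerate(result["segments"], start=1):
          f.write(f"{i}\n{timestamp(seg['start'])} --> {timestamp(seg['end'])}\n")
          f.write(seg["text"].strip() + "\n\n")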


Yeah, it’s like everyone forgot about Whisper because of ChatGPT. A friend had 200 hours of audio of a retiring expert naturalist’s park tours and wanted to publish them so the knowledge wouldn’t be lost. I turned it all into incredibly accurate text files overnight using just my M1 laptop, for free. That’s crazy.


Also effective object recognition & OCR.

I got asked about onboarding someone to a fund online, and it occurred to me to check out current services before responding - 20 seconds with Google Document AI made it clear to me that things have moved decisively in the last couple of years.


> I kind of flipped out, and was motivated to finish this blog piece, when I saw this: “UK government wants to use AI to cut civil service jobs: Yes, you read that right.” The idea — to have citizen input processed and responded to by an LLM —

That's not what the article says; it's about processing responses, not responding to people. I don't think there's anything about responding to citizens.

And it also doesn't say LLM it says AI.



I know. It doesn't talk about responding to citizens as far as I can tell; can you quote where it does?


The UK gov is probably talking about using embeddings to respond to FOI requests.

  1,000,000 documents -> 1,000,000,000 embeddings
  citizen question -> GPT-normalised question -> question embeddings
  match question embeddings <-> document embeddings
  recover document fragments with matched embeddings
  use GPT to create an answer from the document fragments
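
As a very rough sketch of that kind of retrieval loop (hypothetical helper names; embed() and generate() stand in for whatever embedding model and LLM actually get used):

  # Hypothetical sketch of embedding-based retrieval over a document corpus.
  import numpy as np

  def embed(texts):
      raise NotImplementedError("plug in an embedding model; return unit-norm vectors")

  def generate(prompt):
      raise NotImplementedError("plug in an LLM call")

  def answer(question, doc_fragments, doc_vectors, k=5):
      q = np.asarray(embed([question])[0])
      scores = doc_vectors @ q                 # cosine similarity for unit-norm vectors
      top = np.argsort(scores)[-k:][::-1]      # indices of the k best-matching fragments
      context = "\n\n".join(doc_fragments[i] for i in top)
      return generate(f"Answer using only these sources:\n{context}\n\nQuestion: {question}")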

The question is how successfully this process creates the answers required. Who knows? But, I would not be surprised if it worked pretty well and it might boost productivity to the point where there's a massive saving to be had.

Maybe - but that's the fun!


I doubt it.

The article mentions generating answers based on sources like Hansard, and dealing with large numbers of consultation responses. Frankly it's shocking they haven't done any AI stuff with consultation responses; lots of freeform text is where you usually want to start doing clustering and analysis.


Are you saying that you don't think embedding retrieval would work well for this kind of problem, or that you doubt the UK gov will be doing this kind of thing?

BTW - I used to do lots of clustering and analysis on freeform text, but first SVD and then embeddings have just changed the game so much that it's the starting point now.


That the UK gov will be doing that, certainly from the descriptions in the article - though perhaps I've misunderstood what you meant.

If it's "backend tool to more easily find relevant sources" rather than what I originally thought which was "automated FOI responses with minimal/no human oversight" then I could see that more.

The examples in the article were a backend tool for searching hansard and one for what sounds like clustering and summarising consultation responses. I think the latter was misunderstood by the author of the submission to mean responding to citizens rather than processing citizen responses.

> BTW - I used to do lots of clustering and analysis on freeform text, but first SVD and then embeddings have just changed the game so much that it's the starting point now.

I agree; also, things like naming clusters and summarising what a cluster is about are a huge improvement to the results IMO.


Aswath Damodaran's Nvidia analysis: assuming a 32% CAGR over 5 years with a 40% target operating margin at the end of the period, Nvidia is now 40% overvalued. https://aswathdamodaran.blogspot.com/2024/02/the-seven-samur...


Logical, yet how overvalued did Tesla get?


As the old saying goes, the market can stay irrational longer than you can stay liquid. So shorting a clearly overvalued stock can be a dangerous move.


Oops. We posted that at the same time.

Short Nvidia only if you hedge with some call options. Or other financial instrument that mitigates your losses.


In a world where seed and A stage AI startups are getting $100mm rounds and half that money's going to NVIDIA ... eh it's probably not too overvalued. I think there's more oomph left in the bubble.


When the valuation is something like "it looks like it's assuming they'll continue to sell $X billion a year" it can be reasonablish.

When the valuation requires "they will continue to grow sales X% a year" is when it quickly becomes impossible, and for a much smaller X than you might realize.


Even if you do think AI is a bubble, Nvidia's capacity is completely booked for a few years or so if I'm not mistaken. And that's with huge margins.


the market can stay irrational longer than you can stay solvent.


> Last summer, when I valued Nvidia in this post, I found it over valued at a price of $450, and sold half my holdings, choosing to hold the other half. Now that the price has hit $680, I plan to repeat that process, and sell half of my remaining holdings.

Well, it looks like he lost a lot of money


He bought it before me at least, so he *made* money. It doesn't matter that he didn't predict the top. He saw a stock that he thought would increase in value, bought some, and sold at a higher price. That's what matters.


So if someone buys a stock and the company then goes bankrupt, by that logic they lost no money...

Taking money off the table when it is enough for you is the way you guarantee that you make any money...


yeah, but he could have waited a little bit to see signs of a slowdown instead of taking such a radical contrarian view and leaving ~80% upside on the table.

better to risk it going down from $450 to $400 before selling than to miss out on the ride to $800


I hope in both instances the author at least held out the extra couple of weeks until earnings day.


Nope.

Failure to maximize returns is not losing money.


There's a big difference between failing to maximize returns and missing out on an 80-90% upside (so far). Selling at $450 was a pretty bad call


You can't lose what you never had.


Aswath Damodaran looks at valuations with a very one-dimensional lens because he is not an innovator, engineer, or even a business person. Ever since Amazon was in its early days, he has said that Amazon was overvalued and he has always been wrong because Amazon has always found new verticals to build and create more value with.


Damodaran makes his analysis in detail, with the assumptions in the open. Put your own numbers into the Excel sheet and calculate the valuations yourself.

>Ever since Amazon was in its early days, he has said that Amazon was overvalued and he has always been wrong because Amazon has always found new verticals to build and create more value with.

Amazon has been overvalued multiple times.

Amazon stock had negative return 10 years between 1999 - 2009.


> Amazon stock had negative return 10 years between 1999 - 2009.

It did not. You will have to cherry pick very specific days in this time frame (top of the dot com bubble and bottom of the GFC) to get negative returns. But how about 1998-2008 or 2000-2010?

Here is how $10K invested in AMZN performed in 10 years [1]:

1998 - 2008 $102,134

1999 - 2009 $25,124

2000 - 2010 $23,625

2001 - 2011 $111,229

[1] https://www.portfoliovisualizer.com/backtest-portfolio?s=y&s...


https://dqydj.com/stock-return-calculator/

Jan 5, 1999 to Dec 28, 2009, AMZN had an 8.9% annual return.

Jan 5, 1999 to Dec 28, 2008 was -0.52% annual return.

Jan 5, 2000 to Dec 28, 2009 was 6.88% annual return.

But why give a crap about returns during a specific 10 year period? Almost nobody is buying something today to liquidate all of it at a single point in time in the future.


This seriously might be the worst post I have ever read online.

To say it is clueless would be too nice.


Is there a resource I can use for understanding how "dumb money" impacts the markets?


lots of articles talking about "index investing/passive investing market impacts" should get you what you are looking for.


Just because some technology may end up changing the world, it does not necessarily mean that it is a good investment:

* https://en.wikipedia.org/wiki/Technological_Revolutions_and_...

* Via Ben Felix: https://www.pwlcapital.com/investing-technological-revolutio...


> Produce plausible output

That's basically art. So AI's only really good at producing art. So we're safe... but now I feel bad for the artists.


Depends on your audience. One of the things that stuck in my mind from my sculpture degree was a tutor's contention that plenty of (visual) artists produce "things that look like art" but aren't worthy of the name. Subjective, yes, but I knew exactly what she meant. The current generation of AI by definition only produces entirely derivative works. All artists are influenced to some extent by their visual history, whether they like it or not, but there are leaps that some of them make which I don't think anything which works solely on previously-generated work could make, and those tend to be the most interesting. (Although I'd agree that artists who really lean into derivation, like Andy Warhol or Hans Haacke, are also interesting).


> but there are just way more ways for things to go wrong than right in the immediate future

That is always the case so I wouldn’t over index on that.


With the socialization of risk there is no good reason to not try to participate in bubbles--as long as you're not the first one rolled.


> As bad as 2008? Nobody knows, but it wouldn’t surprise me.

The recency of 2008 has really warped people's brains. 2008 was the 2nd worst financial crisis of all time (maybe it would have been the worst if our fiscal and monetary tools were still at 1929 levels of sophistication). You should be extremely hesitant to declare that anything will even come close to it.


Plus, that was not a stock bubble, that was a bad loans bubble. Real people borrowed trillions of dollars they then couldn't pay back. This is a completely different phenomenon.

Not saying this current bubble won't pop eventually, but the scale would be on a completely different order of magnitude. It's not likely that 170-year-old banks will collapse and be sold off for pennies on the dollar (Lehman Brothers) if this bubble ends.


There's an argument that the AI tech bubble is just a continuation from the value-add of automating everything and the productivity/GDP-increase that causes. It is just that investors are so skittish that they'll only let the floodgates open if its "sexy", so AI comes in and restarts the money machine.


Is that why profitable companies like Google are laying off thousands of workers?


While simultaneously achieving record revenue/profit?


What else are investors going to do, sell stocks and buy bonds?


I think most people (including many working in AI!) would agree that AI is currently at the peak of the hype cycle and there will be a bloodletting at some point.

But I don't really understand how AI being hyped, and NVIDIA's stock being overvalued by extension, could result in a 2008-like market crash.


Tech stocks are responsible for most of the gains in the stock market in the past year. Nvidia alone is responsible for 28%...

If that doesn't seem unsustainable, I don't know what does... AI is not pulling up traditional firms, just a very small number of tech stocks.

So with how much is concentrated on a few tech stocks, downturn in tech could lead to significant correction.


>Tech stocks are responsible for most of the gains in stock market in past year.

Ok, but none of the major tech companies other than Nvidia are AI companies. Sure, some of the pop in MSFT's stock is probably because of the OpenAI deal, and AWS, GCP, and Azure are riding some of the wave of new AI investment money coming in, but none of them are first and foremost AI companies selling AI.


A stock market correction, sure. The market ebbs and flows, though. What happened in 2008 was a catastrophe that caused a multiyear worldwide recession. It's called the Global Financial Crisis for a reason.


> I dunno if anyone will build an AGI in my lifetime, but I am confident that the task would remain beyond reach without the functions offered by today’s generative models.

This. LLMs are not the path to AGI. At best they’re one of many ingredients.


> Things have been too good for too long in InvestorWorld

That's not by accident and it's been at the expense of non-investor world for a long time.

What we're seeing is finance capitalism sucking surplus value from every piece of the Earth it can. We're still burning more fossil fuels than ever before [0] despite the now visible risk of climate catastrophe. I believe we're likely to see investors continue to become irrationally wealthy while larger and larger pools of people are driven into ever-increasing hardship.

I see no signs of the madness stopping until both human and planetary resources start to buckle under the pressure and refuse to give the yields they once did. The article repeatedly mentions crypto as though it were an obvious bubble, but Bitcoin is near record highs, even Sam Altman's bizarre Worldcoin is at extreme record highs, and COIN is up 200% in the past year.

The bubble won't "burst"; it's just that fewer and fewer people will be invited in.

0. https://ourworldindata.org/fossil-fuels


At this point crypto clearly isn't a bubble. And why would it be? Cryptographically secured money has some advantages over fiat money. This doesn't mean fiat will disappear and all crypto will succeed. But most fiat currencies haven't really "succeeded" either.

Money is a very "efficient" market in this sense because each individual can decide whether they want to hold currency X or currency Y or currency Z. And there are pros and cons to each that lead to price discovery between currencies. And I think as time goes on it is becoming more and more apparent that many central banks do not take the management of their country's currency seriously, and so people choose alternatives. This is what well-functioning markets look like. Consumer choice and incentives.


> At this point crypto clearly isn't a bubble. And why would it be? Cryptographically secured money has some advantages over fiat money.

We've had crypto for how many years? More than a decade, yes?

Can it be used as replacement for money yet? Anywhere on the planet? Does it look like it ever will?


Yeah, can't imagine the gigatons of CO2 needed to produce CPU/GPU chips (and other hardware) as well as to run these machines, and for what, so some people have to work less generating a document or applying effects to a movie, or to answer a customer's FAQ, or so you can concentrate less while driving...

It really is like that paperclip game where at the beginning you can click once to get 1 paperclip and at the end you're using the resources of a galaxy to generate gazillion paperclips, except on this planet the number we're all irrationally obsessed with wanting to make go up is our bank account total.


It's obvious that bubbles exist in retrospect, but determining whether current growth and valuations are sustainable in the present is incredibly difficult. As another poster mentioned, we are essentially talking about market timing here.

Most investors have been conditioned by many popular talking heads to immediately dismiss the idea of successful market timing - and for the most part, the talking heads are correct. For the average investor, successful market timing is nearly impossible.

However, we have many counter-examples of successful market timers over the long term. James Simons' Medallion fund has returned 50%+ CAGR over a multi-decade period, stomping the market and creating many centimillionaires and billionaires in the process.

I set out thinking, what's so different about Simons and his crew at RenTec? Why is it so difficult for their success to be replicated? Not one to easily back down from a challenge, I began working on my own algorithms to successfully hedge against market downturns and provide superior absolute and risk-adjusted returns compared to the S&P 500. While I haven't yet seen Simons-level success in live trading, since launching Grizzly Bulls (https://grizzlybulls.com) in January 2022, 6 of our 7 models have outperformed the market on an unleveraged basis:

SPX (benchmark): +7%

VIX-TA-Macro-MP Extreme: +39.98%

VIX-TA-Macro Advanced: +34.38%

VIX-TA Advanced: +12.92%

VIX Advanced: +9.91%

VIX Basic: +5.76%

TA - Mean Reversion: +15.46%

TA - Trend: +12.97%

Of course two years of outperformance also doesn't yet stand the test of time of Simons' remarkable run, but I'm confident that we've discovered alpha here.


This is great. I am also running my own AI investment robots, and this is the future. I believe you can absolutely beat the market, and I'm even thinking of using a similar approach on other structured data sets and creating a startup around the idea...

“If you don't find a way to make money while you sleep, you will work until you die.” ― Warren Buffett


> since launching Grizzly Bulls (https://grizzlybulls.com) in January 2022, 6 of our 7 models have outperformed the market on an unleveraged basis:

> VIX-TA-Macro Advanced: +34.38%

I'm not sure how to reconcile that with the numbers shown on https://grizzlybulls.com/models/vix-ta-macro-advanced

When 2022 is selected as the starting year, it shows an increase from 16,911,242 to 17,881,510 which is an increase of 5.7% in 26 months.


ah, sorry for the confusion. That's because the last 20 trades are excluded from the table/chart if you don't have the appropriate access level to view, in this case a Gold membership. That means when you are looking at the chart / table with starting year 2022 you are only seeing the trades up to 3/24/2023 instead of the present.


Right, I didn't notice the end date, sorry. So that was ~5.7% return for ~15 months (not 26) and the remaining ~28.7% was in the last 11 months.


Exactly, but if you compare to the market, even though 5.7% absolute return is a poor 15-month performance, much of the model's relative outperformance came from those 15 months, when the SPX was still highly negative.


If Tim Bray weren't a renowned progressive / socialist, I would have listened to him about bubbles and investing. But he comes at this as a very liberal, skeptical engineer and is always wrong. (Note the use of "right-wing" for tax cuts. Are tax breaks that extreme a position? They are for socialists / progressives.)

Anyway, here's to think about AI. Intelligence is the most precious commodity of all of humanity. Intelligence captured in bits is easily distributed and scaled.

We pay $1000 / hour for intelligent agents and it's easy to see, how a super intelligent system can capture 50% margins on that. (Volume of Data and Proprietary hooks will make switching difficult).

But, wait, there is more.

Intelligence begets more intelligence. For every artifact an AI produces, we need more AI to maintain, enhance, and distribute it. So, for the first time we have an entity whose demand creates its own demand, spiraling into a vicious positive growth rate.

All this means is our AI demand may be at 0.000001% of what the demand in 20 years will look like, which makes AI enablers incredibly cheap. I could be wrong, but to dismiss the possibility is exactly what's wrong with today's "skeptical liberal (with a decel mindset) engineer's" framework / vision.

Build your own mental model


I've seen it before when I started to learn Korean. It was helpful for me.


> Given that, why do I still think that the flood of money being thrown at this tech is dumb, and that most of it will be lost? Partly just because of that flood. When financial decision makers throw loads of money at things they don’t understand, lots of it is always lost.

This is a common take, it feels like a not-cynical-actually-smart counter to the hype train.

I think that take is missing something -- that this is how capitalism pays for fast learning.

You have a space entirely unexploited, you give a million pioneering fortune seekers shovels, and you ignite exploration across the whole unexplored surface area. Most will quickly discover they're unsuited to exploring, or end up picking at dirt where there's nothing to find; some will find fool's gold and labor over it until they realize it won't buy land; and a couple will strike oil instead of gold and build an entirely new economy generating unfathomable wealth.

On the whole, no money was "lost", and that's without even mentioning the costs of delay that were avoided, since this got the exploration done the fastest.

I'm not saying this is more efficient than centrally planned 5 year programs (though in practice it probably is), but it does seem more effective at learning a new thing fast ...

... and getting from “exploration to exploitation” the fastest.


To summarize what I think the author is trying to say with this article:

1) The stock market is in a bubble due to a decade of low interest rates and tax slashing by “right wing” governments.

2) Big tech in particular has been doing well but this is not sustainable.

3) AI is in a bubble. People are pinning their hopes on it to keep tech and I presume big tech growing.

4) A bunch of references to academic papers from 2000 about why AI is hard.

5) Gen AI requires a lot of compute which generates a lot of carbon and is bad for the environment.

Thus his statement: “I think I’m probably going to lose quite a lot of money in the next year or two. It’s partly AI’s fault, but not mostly.”

Which I disagree with. Because A) I think in the long term (5+ years) the investment in AI will have a positive ROI. B) If the stock market crashes in the short term, it's likely going to be for non-AI reasons. C) His arguments as to why AI isn't going to pan out long term are a bit weak.

Having lived in the Bay Area for over 13 years, I’ve seen a few cycles: social, mobile, cloud, gig economy etc.

The cycle pattern is always the same: a) a big new exciting tech idea comes along. b) investors pile in money. c) 95% or more of the companies they invest in go bust and if the space has legs some companies do really well.

How is this any different with the current wave of AI companies?

Today the big winners in AI are the incumbents, some examples:

Microsoft is making money being the hyperscaler of choice for AI companies (on-prem ChatGPT, Mistral, etc.), plus its Copilot lines and enterprise subscription products.

Nvidia is making bank being the current standard on which all of these companies run their models. They have some recent competition from Groq but are still likely going to be crushing it for the next year or two, mainly due to precommits from the hyperscalers.

Meta seems to have been able to leverage AI to claw back advertising revenue lost to Apple's crackdown, by improving targeting.

As someone who has raised venture capital to do an AI startup I’d say yes there is a lot of hype in this space. Yes a lot of these startups are going to go out of business but it’s also early days.

I also think working AI into this poorly written article about how the stock market is going to crash is a bit of stretch.

I’m concerned about a market crash myself, but I am more worried about it being caused by a combo of a) the upcoming US election, b) the war in Ukraine, c) conflict with Iran, and d) interest rates in the USA being high.


Tim writes a post about a "Money Bubble". There is now an alternative form of money that anyone can buy through their 401k if you believe there is a money bubble. He dismisses it (or doesn't even consider it?) since it uses a blockchain, because that's a dirty word in cloud/SaaS tech circles. Sigh.


Money buys things. Cryptocurrency does not. Some may use it as a store of value but it’s not money any more than gold or silver is.


I'm not sure if this argument makes sense today. You can buy many things using cryptocurrency, from food to cars to houses.


Other than a few token places that you might see on the news, crypto is not really used as money. And never will be. Transaction fees.


This is not really true in my experience; crypto is more than just Bitcoin. There are a number of options that do not have high transaction fees and that are used fairly often to trade goods.



