My AI Timelines Have Sped Up (Again) (alexirpan.com)
50 points by hcarlens 8 months ago | 95 comments



I'm no AI expert and I don't have a lot of domain knowledge, but what irks me about reading through this entire blog post is that it hinges on a definition of AGI that apparently accounts for 95% (!) of "economically valuable work" while basically not talking _at all_ about the trades. No mention of construction, manual labour, mechanics, welders, painters, nothing!

Is the author trying to say that all that work falls into the 5% left over after you replace the stock brokers, programmers, artists, etc.?

I understand no offence is meant, but this feels extremely naive. At best the work programmers do is in the minority of "economically valuable" work that we strongly rely on for our world economies to keep chugging along.

Quick and dirty searches show that some 12% of employed people are in healthcare in the USA, 9% in leisure and hospitality, and 2% in education. 23% of people in the USA are employed in economically valuable work that is very personal, and very difficult for technology to fully replace.

Sorry if I'm rambling, I think I just did not enjoy the complete lack of mention of blue-collar workers here. I don't think I'm yet in the camp betting on AGI coming out of the woodwork in the next few decades.


I totally agree with this - it always seems like sort of a weird omission when people talk about AGI doing 95% of work without any discussion of robotics. It seems very obvious that you need bipedal robots (or a lot of different types of robots) to hit that number. Do these AGI predictions assume that we'll have those by the time they hit AGI? Or is the word "knowledge" supposed to be implied before the word "work"? Either way, it's clearly inaccurate to say that AI on its own can do all productive work without physical embodiment.


If we have intelligent AI that can automate programming, then making really good robots will not be a problem. While not trivial, actuators and power systems are not the reason why we don’t have robots that can do all manual labor for us. Software is the reason, and the same kind of software that’s learning to code (machine learning) can also be adapted to washing dishes, folding clothing, doing craft labor or previously human manufacturing jobs.

Accelerating programming and information jobs also means accelerating the creation of robots that can do these trade jobs


Totally - but is the AI development of robots included in the timeline for what you're calling AGI? Can we get to AGI without having those robots, and then the AGI designs them?

I think they'll ultimately go hand in hand - this is more just a question of what we're defining AGI to be and whether robotics should be mandatory as part of the stated definition around doing 95% of work.


The AGI will have to run the robots. Problem-solving when things go wrong is what allows humans to run large organized endeavors without getting stuck, and you need AGI to do that for generalized work. Before AGI, robots will only be able to handle tasks with very simple error scenarios, and will still need humans to look after them for the rare cases where things go more seriously wrong.


I'm not sure that the physical embodiment needs to be robotic in nature. I'm a physically abled home owner and need to repair my HVAC or plumbing. Instead of paying for someone in the trades to come out and do the work for me, why can't I just put on a pair of augmented reality glasses and have the AGI use me? The camera and my voice inputs should allow the AGI to diagnose the issue, order needed parts, and then instruct me on completing the repair using visual overlays and audio prompts.


Even in that very optimistic vision of the future:

* You don't have the tools or supplies on hand (just let that pipe keep flooding your basement until an amazon delivery arrives!), and many of them are expensive and sufficiently specialized that a homeowner might only need them 2-3 times in a lifetime.

* You don't have any training in using the tools, even if you had them.

* You might not be physically capable of doing the task even if you had the tools and knew how to use them; you might easily injure yourself (or others) or make the problem worse, etc.

Think of it like youtube - for most household repair tasks, you can probably already find a youtube video of someone explaining how to do the fix. Do roofers and plumbers and electricians still exist, even though a youtube video can show anyone how to do most of those tasks? AGI glasses seem like they might make less of an impact for most homeowners than youtube has.


That's not optimism, it's pessimism. Some wealthy company building an app to sell to DIYers that puts trade people out of work, all in order to make money, seems unlikely to you?

* Every time I've encountered a home owner with a burst pipe or flooding scenario their immediate problem was lack of knowledge of the different ways to shut off the water supply. And if the problem is something as simple as lack of a wrench then you could always go get one faster than an emergency plumber could get onsite.

* The AGI glasses don't need to solve 100% of problems to be disruptive to the skilled trades. No reason they can't refer you to a human specialist. An (outsourced) human could join your AR glasses session (for a cost) and see if they can provide more instruction to both you and the AGI, or if a referral to a local human is needed. A Google-supplied augmented reality AGI could charge users for specialized assistants AND then charge human service providers for higher placement in the referral listing.

* Local tool rental is a common thing; no reason hardware stores couldn't function as same-day fulfilment instead of amazon. And a lot of tools are cheaper to buy, even if they only get used once, than paying for a single service call.

* Re: tool training - I'm sure the AGI could recommend approaches to avoid tooling where possible. For example recommending shark bite connectors instead of brazing pipes, etc. But yeah, some people just won't have the dexterity to use a hammer. The issue of being able to plaster or finish drywall nicely that vidarh pointed out in another post is also a good example. It might be a boon to the local AR/AGI-enhanced handyman.

* Agree on roofers and electricians... where the possibility of self harm is high these scenarios are too risky without basic training. Maybe you'd have to get a DIY electricity safety license to unlock that ability in the AGI. Or maybe only apprentice electricians can use the electrical AGI, the result would still be less need for master electricians.

* Re: YouTube - As a DIY homeowner I love youtube but having to wade through hundreds of videos until I find the one video that applies to my scenario is exhausting. An AGI would already be trained on much of that knowledge. It'd lower the barrier. Again it doesn't eliminate the need for the trades but it does disrupt them further than youtube already has.

Think of it more like adding a self-checkout line to the trades. It helps in a lot of scenarios; the cashier job role doesn't go away, but a lot of people are unemployed or making less money all of a sudden.


I just think the impact of such a device would be on the order of "a marginally better youtube for DIYers" rather than "put all the trade workers out of business." I'm not even convinced a lot of the more recent AI work will result in a net productivity gain, vs the productivity drag associated with increased spam and other AI-generated garbage. Will ChatGPT-like software manage to displace more workers than, say, mail-merge running on a 1980s computer? I'm not sure I'd take that bet. Self-checkout lines are a good analogy, in that they've been around for over 20 years now, and they're still a somewhat marginal productivity improvement over regular cashiers.


>why can't I just put on a pair of augmented reality glasses and have the AGI use me?

[meme time]

Oh god, oh fuck, we're going full Manna.

[/meme time]

https://marshallbrain.com/manna1

For those who have not read Manna, a huge portion of the premise of the story is that AI helmets turn us into automatons that allow the AI to further automate our jobs until we are all unemployed and destitute.


Thanks - I should have worded that better. I'd like an AGI that helps to augment my abilities, not control me. But it's still an AGI replacing a trade person; having a master plumber standing over my shoulder was more of what I was thinking of. The AGI having the knowledge is a form of control I guess, but it seems acceptable to me.

I hadn't seen Manna before, thanks for sharing. There are some good thoughts there. The bit about girls liking Manna because it doesn't hit on them was excellent. The dad character had some good points too.

The examples do seem a bit over the top (likely to drive home the premise). But honestly the verbosity seemed excellent for a new hire scenario. As long as it dynamically scaled down the verbosity of the instructions to match what was needed by each individual user, would it really be awful if the managers got replaced?

It's a fine line between the tech being used to add to our knowledge and it making us dumb robots. If it could be kept to teaching, certain types of management, and gamified checklists it wouldn't be the downfall of the human race. However, I could easily see it getting to the point where customers are talking to the AGI mounted on your head instead of you, and that is scary.


Honestly, Manna is a story about AI way before its time, because it captures that most of the battle we see now isn't going to be about AI at all, but about corporate data mining in order to dehumanize you.

I mean, this is OpenAI today: "By using our product you are training our product to be you, please insert $$$ to avoid this scenario"


>physically abled home owner

I think you nailed part of the problem here. For no less than 30% of the population, the first step with the goggles would be: wear the goggles and go to the gym for at least 6 months, along with diet guidance. "You are about to eat a Snickers bar, this will delay your air conditioner repair by 16hrs"

I once tried to show a white collar friend how to spray for bugs himself; he needed a break after hand pumping the chemical sprayer. I'm pretty sure in the end he just paid someone anyway.


Having just done a half-assed plastering job, I will answer that with "because a lot of trades need practiced skills, not just the knowledge".

Sure, it'll be one more thing that eats into bits and pieces of what we need trained labour for, but it won't eliminate it.


I think the first roadblock to this method is the grey area of liability. I feel like a lot of the value in qualifying people for trade work is less about getting the work done and more about maintaining a chain of accountability. The AR tech can absolutely be used, and I think it is currently used in some places, but the person using it will likely still need to be an employee of a company willing to accept 100% of the risk. I wonder what we'd have to do to get around that.

Some examples:

Something goes wrong even though you did what it told you to do: it didn't account for something specific to your situation, and the fine print says you were supposed to provide info on any non-standard circumstances, but you aren't qualified to know what needs to be said.

Or it asks you to do something surprisingly difficult that you don't know is outside your limit until you're in the thick of it - like holding something down with a lot of pressure and your hand slips causing damage or injury.

Or you do the whole thing and everything seems fine, and then later it breaks and causes damage/injury, the cause is linked back to your work so now fault needs to be assigned/split between you and the AI.


I guess at this point it's just a matter of the semantics of what it means to say AGI is doing the work. My feeling is that it's not accurate to say it's doing the work if there's still a layer of human meat sack (not you personally!) interpreting and taking the actual action.


That sounds more expensive than paying a tradesman, except you're on the hook for buying or renting all the tools, and when something needs four or more hands, can you email support and have them upload an assistant AGI to carry around the heavy materials?


One good robot arm that a below minimum wage gig worker carries around.

edit: he can also turn the crank to keep it powered maybe


Or the minimum wage gig worker is the robot. If we ever get neural interfaces working well, maybe the AGI could control our motor functions. Maybe not full control, I don't think most people would be comfortable with that, but think of shooting a gun. If the worker points the gun in the right general direction and the AGI adds some fine-tuning to turn them into an expert marksman... what army wouldn't turn all their conscripts into marksmen? Basically the aim-assist bots in video games, but for the real world.

It could be used to let gig workers perform surgery, play in an orchestra, be actors in a movie.


robot arm is cheaper


I agree with you; however, when talking about jobs that are relegated to the physical world like the trades, I don't think it's safe to assume that robotics won't also benefit from the AI advancements, thereby eventually encroaching on those jobs as well.

The following is a recent video from the youtuber "Two Minute Papers" about some researchers (Clemens Schwarke, Victor Klemm et al.) who have created a highly mobile autonomous robot running software trained via simulated reinforcement learning models.

https://www.youtube.com/watch?v=Nnpm-rJfFjQ

I think it demonstrates the potential for the kind of motile finesse that could be used by robots to complete trades-like tasks.


If you stuck an AGI inside a Boston Dynamics robot and trained it on a trade I suspect it could replace much of that trade. https://www.youtube.com/watch?v=-e1_QhJ1EhQ

Not an easy problem, but the assumption is that AGI makes solving such problems tractable.

Japan has been working on healthcare robots for decades.

I think the key word is that the robots "could" replace 95% of economically valuable work, not that they will. Using an example from today: we can and have replaced fast food order takers with kiosks, but there are still a lot of people employed as fast food order takers because that's what customers prefer.


Yes, but...

There are unknown issues down that road. Given 1000 years, AI could probably replace all work, but I think we're all on the same page about "could AI replace 95% of work within a timespan that's meaningful to people alive today?", and on that timescale the realization of an AI-powered machine doing all manual labour (even an AGI one) still has too many unknowns for a meaningful answer.

We were “going to have flying cars in X” for a long time, but it turned out that building a flying car was not actually the problem that needed to be solved in order to make them viable.


It is also not necessarily what people want.

I do think social mobility from being able to work is something people want. I think AI researchers want to feel important and like they have purpose. As they start to realize that will all vanish once they invent AGI, it will be interesting to see how they proceed.

I think human augmentation is far more attractive and will be a more popular field of endeavour, if we work out how to get better at it.


Isn't this what Tesla is trying to achieve with Optimus? They were just demonstrating fine grained control using "AI" training to perform human scale tasks. The big idea is that they are human size so they can be slotted in for any task that is currently performed by people without needing any special accommodations.

I personally don't think they'll be economically viable in the foreseeable future, but they do represent a possible bridge to ubiquitous robot labor.


I agree that it doesn't account for blue collar workers, but it wouldn't surprise me if the balance, purely in terms of financials, was actually that unbalanced.

I'm not saying it's correct or fair that a trader sat at his desk can contribute £20 million to a country's "output", while a builder working his backside off could maybe add £100k, but that's the world we live in...


If you're a builder manager you can make more, but if you look at residential housing, the people at the bottom aren't making shit. Mostly it is a poorly regulated, non-unionized industry that is highly dependent on under-the-table pay.

But any way you look at it, if there is suddenly an influx of manual/physical labor poured into the market, labor prices will drop far below the cost of living and all hell will eventually break loose.


> I'm no AI expert and I don't have a lot of domain knowledge, but what irks me about reading through this entire blog post is that it hinges on a definition of AGI that apparently accounts for 95% (!) of "economically valuable work" while basically not talking _at all_ about the trades. No mention of construction, manual labour, mechanics, welders, painters, nothing!

A good chunk of the value in the "trades" is in the experience and in the management.

If I can make anyone a painter by giving them a headset [1] and having an "AGI" model tell them exactly how, where, what to paint, how much of the value is the model collecting and how much of it is left for the human being?

[1] https://www.ptc.com/en/resources/augmented-reality/infograph...


The belief is that once AGI is here, the possibilities are endless for disrupting absolutely every aspect of our life and society; "the end of human history" is what I've heard OpenAI folks say.

There will be zero work, humans will be completely useless economically, and we'll just have to hope that somehow all of this wealth is fairly distributed and we don't lose our minds from feeling completely useless.

What you can't imagine happening now will be possible by deploying the AGI. Things we can't even dream of will be possible. It will all be chaotic and completely unpredictable overnight. Eternal life? No problem, pick up the phone and place your order.

What's appealing about this? I don't know, but billions and upon billions are being poured into it and many "smart" people are striving for it.


We don't know what will be at the end of this. It seems that the OpenAI folks are endlessly pursuing the technology of AGI without actually working on making sure that AGI benefits "all of humanity." They haven't even articulated what it means to benefit all of humanity.


They think they'll live in a super-utopia and the rest of us will simply die away. I hope we can start to face the REALITY of this change now instead of these nonsense salves like "more types of jobs will be created" or "UBI will work", because that's not how biological beings given unlimited power actually behave. They crush their biological competition as swiftly as possible. That's what will happen. The folks at OpenAI think they'll be on the winning side, and know that if they're not, they will be destroyed by a competitor's God Machine and die like the rest of us. Their zealotry is just fancy self-preservation at the cost of all the other humans.


OpenAI is a cult. I don't quite understand why they are given so much airtime other than wanting to depress tech wages and drive traffic. Their products don't work as advertised and are built on theft.


Everyone's products/lives are built on theft. I mean, you paid for that alphabet, right, and for understanding those forces of physics?


This argument is so 2023.


Damn singularity accelerating timelines and meme-cycling.


I am taking their mission statement seriously even though I am seriously skeptical about AGI. But they don't take their mission statement with any seriousness.


I have no doubt their mission is serious. It's dead serious. That's why they are willing to steal without a single drop of concern for the people whose lives they are affecting. And they have no issue gaslighting people into thinking that's OK: "Because AI learns just like humans". At the end of the day, after decades of work, all they have to show for it is a procedural bullshit generator. I'd be desperate to show progress too. Still, I wouldn't steal.


That's what's so insane about it all :)


> What's appealing about this? I don't know, but billions and upon billions are being poured into it and many "smart" people are striving for it.

Men's way to "give birth"?


I predict AI will make virtual reality finally take off, and it will open new worlds. I don’t really know what that will mean for humanity, culture, and the economy.


> There will be zero work, humans will be completely useless economically, and we'll just have to hope that somehow all of this wealth is fairly distributed and we don't lose our minds from feeling completely useless.

That's delusional on a disturbing level. It's as if those people have no clue about the real world, or are willingly lying. All the wealth we have now is not fairly distributed, and just having more to distribute won't change this. Unless they speculate on putting enough stress on the system to break it, and think it will lead to something better, because...?


Most of this assumes that "AI" is controllable and containable?


I imagine 5% could easily be work where you want an actual human being, not because they are better than AI, but because they are a human being, and that is what you are wired to want.

I'm pretty sure 95% in that definition would include trade workers.

5% could be something like teachers, babysitters, because you want children to have that human to human connection.

I think it could be thought of as "95%+", meaning even if the computer is more intelligent, we'll still want humans for some things simply because they are humans, because we ourselves are humans and that's what we want.

Things like healthcare should be fully automated, except for a small percentage again. You have a health problem, you go in; you don't need that human connection unless you have a very specific need, some of the time.


I am more concerned about people abusing AI technology to generate spam and fakes and to trick people, resulting in an almost entirely useless internet and a polluted information space.

Maybe it's for the best that we start meeting each other in meatspace.


"reading through this entire blog post" ?

Is

"The Physically Embodied Bear" "95%+ will include taking physical, real-world actions."

not a mention of "construction, manual labour, mechanics, welders, painters" and "blue-collar workers" ?


That paragraph was my exact problem: hundreds of words, but the entire world of AI interacting with the physical world is boiled down into an assumption that "by the time we get there, we'll get there". It just doesn't seem realistic, especially with predictions/hunches like the ones found in the article.

Others seem to mention it is a sort of feedback loop: once we get to 50% AGI, I guess that AI itself can help us overcome the physical limitations. I'm sceptical that it is as simple as that, but at the same time I can see it as very much possible.

How much do current AIs contribute to their own development? Likely quite a bit, but then again I don't know any AI researchers to ask.


>How much do current AIs contribute to their own development?

Really a pretty fair amount these days, as AIs can contribute to their own training data. In addition, they've got pretty good at measuring their own success. In the past, for example, if you had a robot hammer hitting a nail, you'd have a person judging the quality of the strike, but now you can point a few cameras at it and have it judge itself far better. Yea, you still need some people around, but the amount required has dropped a lot.


The more technology can do, the less the workers do, the less workers get paid. Being a servant to people with money is better than being part of their furniture though.


On the other hand, things should get cheaper and more accessible?


Minimum wages are always going to be defined by scraping by. Definitionally that's just where they get pushed both in terms of wages and prices. But yes, removing skilled labor and replacing it with unskilled labor does mean more perks for the wealthy.


I guess this assumes "AI" is a containable entity?

I mean, why can I not get my SuperSmart5000 robot to self-improve and then use its intelligence to make my situation better?

Personally, I think this is why open source AI efforts are so important. If we're going to get wiped out by AI, I hope it's an open source AI that does it.


because you have no particular resources to leverage


> No mention of construction, manual labour, mechanics, welders, painters, nothing!

I have the strange feeling that it will be those jobs that will be automated further. There is no need for manual workers in construction - you can have bots build houses, and bots paint them inside. Mechanics are likely to be phased out by EVs; there's not even a need to manually change wheels and tires. Everything they do can be automated, and will be automated.


I think the idea most have is that by the time we have AGI, we will have robots designing robots that will 'fix' AI for the trades.


Ah, yes, that second most famous AI operator after the tensor, the handwaver.


Sure, but that’s what the ‘believers’ usually say.


I thought it was assumed that automating blue-collar work would be attainable much sooner than AGI, so only the intellectually complex work would remain for AGI to solve.


Yeah, by that definition we already eliminated 95% of the economically valuable work of the 1700s. Very few people do farm work nowadays.


He clarifies later that he is including physical labor (robots).


I'm glad an AI researcher is posting their personal thoughts. As a fellow researcher and developer in the AI space, I conceptualize AGI as a self-domain-learning entity that is able to manage itself and understand how to affect/intervene in its actions and their consequences (humans don't reach this on a general level until their early teens to mid-twenties, if ever). Walking is a solid example of this at a specific level. Singleton objective functions and patterns across probabilities of states are insufficient -- there needs to be ongoing interpretation and architecture extrapolation.

In short -- current AI approaches are great at interpolation and still perform poorly at most extrapolation exercises that a human can take on.


My AI timeline says the next AI winter will come in 10 years, when people find it's vaporware... again.


It has supposedly been coming for the last 20 years. The reality is the cat is out of the bag and it won't be stuffed back in.

Even at today's levels of AI, a very large amount of work could be optimized if domain experts knew how to use the AI tools. As people become more used to using them, I don't see an AI winter coming.


Even if we had no new developments in AI, the effects of the current SotA would be massive over the next decade. There are myriad tools already built on or incorporating AI to deliver real value.

Ignoring the fact that wild science fiction has become totally mundane within the last 10 years is putting your head in the sand.


I saw an ad on the London Underground recently from a brewery that included "AI." It's starting to feel a lot like the dotcom/crypto bubble.


When the market is on everyone’s mind, it’s time to sell.

Joseph P. Kennedy, 1929 (paraphrased)


Headline: "AI Winter shouted from the rooftops for the millionth time this year; meanwhile the weather forecast shows balmy 85 degree temperatures for the next few years"


AI isn’t vaporware, but in my view there is a massive AGI bubble right now.

Everyone is attributing high level reasoning and intelligence to LLMs and quite frankly there is little evidence for this being the case.

There is almost certainly an AGI winter coming.


Disney and Pixar animation are about to get 10,000x - 100,000x cheaper and accessible to nearly anyone.

When applied to the correct domains, AI is incredible.


What good does that do, though? It'll wipe out entire careers so you can have what? Cheap cartoons?

Not disagreeing, just curious what makes you think this is so incredible? Did you not have enough to watch before?


I want to make things. Due to opportunity cost, capital cost, and personnel cost, I could never be in that seat.

This will empower fan fiction authors, web comic authors, indie creators, et al. to all reach new heights that they couldn't before.


You won't be making, you'll be generating whatever the model allows you to generate based on copyright restrictions?


To actually think this requires nothing but raw ignorance of the creative process.


You can't draw that conclusion from what I said at all. These are tools.

Good web comics and fan fiction require creativity. They're low cost, but high effort endeavors that are achievable by solo creators.

Now there's a path appearing that will give this same class of people the tools to reach new creative heights that most could never achieve by themselves.

This is a good thing. It's a tool. You still have to be an artist to use it effectively.


Believing AI tools are going to give people anything approaching Pixar capabilities demonstrates your ignorance perfectly.


I'm definitely not ignorant, because I work directly in this field. The things happening in academia, within my company, and with competitors will blow your mind.

Let's circle back on this soon.


"Work" to us in the tech bubble is sitting at a desk and doing some typing and clicking and sometimes decision making. The world out there is much different. Work for many people is physical or very human (like a teacher or psychologist).

Yes, it's easy to guess that jobs which mostly involve moving information around will be automated, but that's not all jobs by any means.


Imagine you learn about some new conjecture in mathematics, and you try to predict when it will be resolved. Your prediction says that there is a 50% chance that the conjecture is resolved in the next 25 years, and a 90% chance it is resolved in the next 50 years. So, conditional on the conjecture not being resolved in the next 25 years, there is an 80% chance the conjecture is resolved in the subsequent 25 years. Wouldn't that be a little strange?
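(To spell out the arithmetic behind that 80%, here is a minimal sketch in Python using only the two probabilities stated above; the variable names are just for illustration.)

    # Figures from the comment above, assumed for illustration:
    p_25 = 0.5   # P(conjecture resolved within 25 years)
    p_50 = 0.9   # P(conjecture resolved within 50 years)

    # P(resolved in years 25-50, given it was NOT resolved in the first 25 years)
    p_cond = (p_50 - p_25) / (1 - p_25)
    print(p_cond)  # 0.8, i.e. the 80% mentioned above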


Title should be: "My AI Timelines haven't sped up, but now I think there's a 10% chance of AI dominance in 5 years"

His AI timelines haven't sped up, only clarified


It's also barely clarified and has a ton of false-precision problems, which I'd expect a researcher who works in the field not to have. Why 10% vs 20% or 15%? No real reason, just vibes, but it gets clicks.


At some level, isn't any prediction about the future that involves too many variables always going to come down, at least somewhat, to vibes when trying to create a percentage confidence level?

Is there a way to actually mathematically quantify this better? When people make predictions with percentages like this, I understand what they’re trying to communicate, even if the math isn’t precise. There’s a reason all his percentages are in clean multiples of 5.


There’s no reason to use numbers then. Use words and don’t hide behind sounding precise and mathematical


Maybe for you. I actually do think something meaningful is communicated when someone says “I’m about 60% confident in this” versus saying “I’m fairly confident in this.”

I don’t take it as him trying to put on a veneer of precision on what he’s saying. In fact it feels kinda obtuse to assume that’s what he’s doing.


He can "feel the AGI".


It's right there talking to birds sent to spy on us.


I'm an "overhang" bear: while incremental progress will be made, it'll come at too high of a cost. The larger, slower, more expensive models have limited applications (mostly fine-tuning the economical ones), so there will ultimately be less investment. Unless there is a new unlock, we'll be left with mediocre 3.5 models for most real-time apps and excitement will fade.


ChatGPT is one giant training data acquisition project, including Pro. It's painfully obvious when you see that any useful use case eventually moves to the API (GPT-4 web/app flat out refuses to code more than a function or two at a time now; it had no issues with multiple files before).

Everyone is providing the greatest training set you can now find (since all other venues for training data will gradually disappear behind walls and fees)


It is not quite 70 years since this proposal was written:

http://www-formal.stanford.edu/jmc/history/dartmouth/dartmou...


“Today is the worst AI will ever be.” I’m gonna steal this one. I can’t say the same about web development, for example, or the US political landscape, or climate change, or mental health, etc.


AI doesn't just make progress; during an AI winter, AI gets worse for a while. Deep Blue was the best in the world, then they scrapped that project, and it took a long while until anyone made a better chess engine. The same can easily happen for LLMs etc.: if it turns out they are too expensive to run without VC money, we will see them quickly regress once the next winter hits.


Mine was 50/50 at 2035 after GPT-2, and with GPT-4 it is more like 90/10 with 50/50 being sometime in 2028 (that's for pure intellectual work).


Is it really a speed-up?

The target for 95% is still 2070 and the 50% mark is still 2045, which means the progress between 2035 and 2045 is now less than before.

So the timeline has sped up and slowed down at the same time.


Yes.... what the military needs most is an auto-complete tool that was trained on Reddit material. (GPT-4)


He should take into account that in 3 months the timeline will accelerate 20%, compound it, discount it to the present, and give a more accurate answer.

Let me do his job for him: 90% AGI will happen by 2030.


For fck's sake, please stop using this font. I know Apple started the trend but even they went back to sane alternatives. The narrow lines of this font are torture to the eyes.


I think this is just plain Helvetica, with weight at 300 rather than 400.

Apple used to use Helvetica, and even an ultra thin version of it, but I wouldn't say they started the trend of using Helvetica.



