Having worked in the space on-and-off since my PhD, the pace is not "overwhelming". You just started paying attention to it.
ChatGPT and Stable Diffusion are good, but they're incremental gains over what came before, which were incremental gains over what came before that, and so on...this has been happening for a long time now. If you ask me, the "amazing" part of ChatGPT isn't the output, it's the query understanding. The output is still (very pretty) garbage >10% of the time, and that's the danger zone...the Level 4 of autonomous thinking.
At the risk of saying something that makes me look dumb in ten years, this moment feels a lot like the "Level 5 self-driving will make driving obsolete by 2020!" panic we had circa 2015. Lots of investors and tech enthusiasts told me how stupid and shortsighted I was back then, when I said that we hadn't made as much progress as they thought we made, and how steep that remaining curve would be.
Yes, thank you. I find myself in the funny position of going from what would popularly be called an AI optimist to an AI pessimist over a short time without changing any of my views. The fact that we've found a new mode of ML that's 95% impressive doesn't solve the problem with the other 5%, which is the same thing that happened with the other modes. We're all just still in the honeymoon phase.
I feel bad in a sense for the serious ML people working in this area, because there was never even a real period to reflect about how cool it is before having to be the grownups in the room and push back on absurd expectations.
Any expectation that we're going to see mass unemployment, or entire career paths made obsolete, is based on extremely inflated expectations of what LLMs could maybe do in the future.
It has a very similar feeling to the mid-2010s claims that Blockchain would displace bank settlements/SWIFT, completely change the supply chain, and make governments change how land/property ownership was tracked.
Same was true for "deep learning" 4-5 years ago. There's a demand for consulting companies to come out with "future of work" reports about how technology will impact jobs. Everyone knows it's flying-car-style futurism and not serious, but the media picks it up as reality. Last cycle, there were all kinds of reports on jobs that deep learning would make obsolete, truck drivers and radiologists being the flagship cases. This time it's something else. It almost feels comforting to know your job is one of the predicted-obsolete ones, because of how wrong these things are.
I’m seeing at least applied NLP positions going up in smoke… research is probably still going to be done, but as far as business is concerned, GPT ate NLP whole and didn’t even spit a single bone out.
That's maybe a bad comparison because blockchain was practically useful to no one whereas chatgpt, copilot, et al are legitimately starting to become useful tools.
I don’t know. Progress may be linear for you but for mere mortals the jump from ‘useless but interesting toy’ to ‘indeed capable bullshit generator’ to ‘holy shit this thing run in a loop does my work for me’ was anything but linear.
And it turns out that the really valuable part (the ability to converse) doesn't even need that much data. That has been the most surprising part to me.
Langchain or just feed it what it generates manually. I particularly like the ai-as-ceo publicity stunts or attempts at hooking it up to a bug tracker to manage the work and then execute it… no links on mobile but should be easy to google
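(For anyone wondering what "feed it what it generates" looks like in practice, here's a minimal sketch assuming the 2023-era openai Python client (v0.x); the model name, prompts, and step cap are placeholders, not anything the parent specifically described.)

    import openai  # assumes OPENAI_API_KEY is set in the environment

    messages = [
        {"role": "system", "content": "You are a project manager. Propose one concrete task at a time and wait for the result."},
        {"role": "user", "content": "Goal: ship a landing page for a note-taking app."},
    ]

    for step in range(5):  # hard cap so the loop can't run away
        reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
        content = reply["choices"][0]["message"]["content"]
        print(f"--- step {step} ---\n{content}")
        # Feed the model's own output back in as the next turn; in the bug-tracker
        # stunts this is where a human or a webhook reports what actually happened.
        messages.append({"role": "assistant", "content": content})
        messages.append({"role": "user", "content": "Done. What is the next task?"})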
I'm working on building a CEO-aaS, for fun more than profit, though that too... Something mere mortals (not langchain experts) can use: you input some competitor info, API docs, and benefits/features, and the CEO bot helps you start building a company around the tech/offering/business idea. Whether it's a SaaS, a marketing agency, or a landscaping business, it'll give you the best items to work on to expand your business, so you can just focus on those and not have to think about what to do next to build your business.
It’s kind of wild to hear someone say this and it leads me to believe your expertise is outside the world of transformers.
In 2014, we were training encoder/decoder models on maybe a billion tokens, mostly limited by models architected around time steps.
Today, we are training decoder-only models on trillions of tokens, mostly hindered by eminently solvable stability and scaling problems (i.e. can we make enough parallel compute for our infinitely parallel models).
Maybe that 10 years of progress feels the same as going from LSTM (1997-ish) to decoder attention (2014-ish) over 20 or so years, but it doesn’t to me.
Not only is it not in the world of transformers, I find it unlikely it's in modern ML at all. DeepMind has been putting out shocking work in Deep RL which is as impressive, if not as hyped, as anything from OpenAI.
GPT-2 was four years ago. The pace has indeed been enormous. The fact that the general public has only been paying attention recently really doesn't mean much.
Meh. The difference between GPT-2 and GPT-4 is not as revolutionary as you're making it seem. The ability to process images and text as input is impressive, and also an evolutionary combination of things that were happening separately before.
I'm sorry, but I find this take to be absurd. Across every metric, GPT-3 was a massive update past GPT-2, and GPT-4 soundly beats GPT-3 by similar margins. Beyond OpenAI, AlphaGo was a massive milestone; people thought it would take decades until an AI could play Go at top professional level, then AlphaZero blitzed AlphaGo and MuZero blitzed both of them. AlphaFold was also a massive breakthrough in protein folding.
Idk what your "PhD" was in but everyone I know in the field is struggling to keep up with the pace of research so unless you are working on "Expert Systems" and trying to convince everyone that the past 10 years of DNN progress has been irrelevant I don't understand how you can have this point of view.
> AlphaFold was also a massive breakthrough in protein folding.
My PhD was applying simpler ML models to problems in structural biology (this was a while ago, so transformers and other large neural models didn't exist then), including protein structure prediction. Most recently I worked in applying generative models and neural network classifiers to problems in drug discovery. I have also worked in IR (search) and NLP, but again with older technologies.
The AlphaFold example is apt: yes, it outperformed the other methods at CASP. It was indeed a dramatic leap relative to those methods...which didn't work very well. The claim from laypeople that "protein folding is a solved problem" is total bunk, and yet we see it constantly. If you don't know the field, you will be fundamentally misled by statements like "breakthrough".
For other areas (e.g. NLP), the definition of "progress" is also well quantified, and is not accurately captured by the breathless hype that surrounds this technology. Are these large transformers a dramatic improvement? Absolutely. No question. Are they the end of white-collar labor? No. That's ridiculous. Has this been an evolutionary change over the last decade or so? Not a smooth curve, but yes. The world didn't suddenly change overnight.
> Idk what your "PhD" was in but everyone I know in the field is struggling to keep up with the pace of research so unless you are working on "Expert Systems" and trying to convince everyone that the past 10 years of DNN progress has been irrelevant
Except, I'm not trying to convince anyone of that. I'm saying that these models are evolutionary, progress in research is always a form of punctuated equilibrium, and picking random points of progress across fields and extrapolating to the infinite, glorious future doesn't work well. That's what people are doing today.
> You're making a different argument to what you started the thread with.
No, I'm not. The field has been making gradual improvement for a long time, with transformers (ca 2017) being a notable jump forward. Research is always a form of punctuated equilibrium.
> Starting with AlexNet there have been numerous discontinuous improvements in CV, NLP, Game Playing, etc. It's just not true to say otherwise.
Nor did I. I said that these models have been advancing gradually for a long time. The fact that particular steps can be characterized as "jumps" when viewed in isolation is not a rebuttal.
Yours is also just an opinion. But having a baseline understanding of what came before is important to having an informed opinion. To that end: GPT is not the first tech to combine multiple NLP tasks into a single model. For example, see this survey from 2019 [1].
You linked a paper that came months after the release of GPT-2. It is also a modified transformer, the basis of most LLMs today and an architecture that was released only two years prior.
and this is without mentioning that NLP is far from the only thing LLMs do/excel at.
Frankly, nothing you have said so far has made me think your opinion is more informed than mine.
No. I linked to a blog article which mentions many different papers, many of which were from 2018 or before. ELMo, BERT, UniLM...all examples of multi-target NLP models that pre-dated GPT.
I didn't dig this up because I thought the date on the webpage was some kind of rebuttal. I picked the first article I saw that mentioned models I knew about, with dates.
> It is also a modified transformer
Well, yes. Exactly. We're in the echo of that work, which was the core advancement that happened circa 2017. As I keep saying, it only seems revolutionary if you haven't been paying attention to the last 5 years.
This 'revolution' is largely fuelled by clever marketing and hype generated by Microsoft attempting to take Google's search crown. Unfortunately, that use-case has fallen flat on its face.
With hallucinations, bullshitting, and regurgitated falsehoods, it continues to be unable to reason transparently, and like all neural networks it is a black box with limited explanation of its decisions, still requiring humans to review the output, just like the 'AI systems' that still require humans to sit behind the wheel of a so-called robo-taxi 'AI' driver. Sounds like typical AI snake oil.
The only 'safe' use-case for LLMs or even GPT-3 is 'summarization'. Everything else is complete hype, devoid of any reason that would justify the euphoric mania of one section of 'AI'.
I really doubt you understand LLMs at all if you think all they are good for is summarization. You will be shouting at your wall while the world changes outside.
> I really doubt you understand LLMs at all if you think all they are good for is summarization.
A 'counter claim' that offers no examples refuting mine can be dismissed without any further explanation, since my point remains un-refuted.
Summarization of existing text is the only safe killer use-case for LLMs, unlike the rest of the so-called 'applications' and the hype of applying them to everything, where they make little sense beyond attempting to jump on another hype train.
I predict that the majority of these ChatGPT-enabled startups will be closing down in the long run, due to lack of product-market fit, running out of money, or a bigger company overtaking them.
There are examples all around, including it being able to solve newly generated logic puzzles. There was quite a bit of discussion about variations of the river-crossing puzzle.
>A 'counter claim' that offers no examples refuting mine can be dismissed without any further explanation, since my point remains un-refuted.
What point, that LLMs are good at nothing other than summarization? That's not a claim arrived at via logic or evidence, it's a poorly thought out opinion that happens to be wrong. There's nothing to refute.
In popular consciousness, there seems to exist this blind optimism of “it will only get better/faster/etc from here”.
Which may be true, but as we saw with semiconductors, there’s always going to be an asymptote. And we don’t really know where that is until we hit it. Are we in 1970 semiconductor territory with LLMs, or are we in 2020 semiconductor territory? Ultimately, arguing about where we will be in ten years is somewhat futile.
I agree with you that the "amazing" part is the query understanding.
There are two really big things I'm excited about:
1. It really understands what I am asking. What my meaning is. GPT-4 is way, way better than even GPT-3.5 at this. This is something truly different that a computer has never been able to do, and (hopefully) it will only get better at it. This really, truly changes everything in how we interact with computers. We shouldn't be downplaying this. This is real. Today. At the same time, a lot of turning this "understanding" into useful output comes from a huge amount of work on the RLHF side, so we shouldn't necessarily extrapolate too far into the future just based on OpenAI's current progress. It's still very difficult work.
2. Everyone is now paying attention to LLMs. When the world starts paying attention to things and engineers start hacking things together, big companies start pouring money into AI, maybe Google wakes up for once, students start choosing AI as their career field, etc. etc. - amazing things happen. AI just went from niche to something that everyone is using and thinking about.
Just because we're still a ways off from AGI, doesn't mean that the space isn't developing rapidly right now. I'm not sure autonomous driving is a fair comparable.
It's pretty crazy how much better AI offerings are today than they were a year ago. Non-Tech companies are now actually using it (and paying for it), which is a night and day difference from where it used to be.
Advances in AI are informing vehicle autonomy. Tesla is effectively developing an LLM where the road system is the domain language. Recent innovations in NeRFs are enabling volumetric object detection with a single frame from multiple cameras, etc.
Agreed. But I think you're referring to "AI innovation" and the author (and others) are referring to "AI product innovation," i.e. how AI is actually showing up in people's everyday life - and that pace is absolutely overwhelming.
> the pace is not "overwhelming". You just started paying attention to it.
I've been paying some attention since 2015 when I worked a neural net gig. And the pace does seem to be increasing. Just in one small corner of LLMs we see LLaMA, and then Alpaca, and then the LoRA fine-tuned versions of the LLaMA params, and then the various effects of different levels of quantization - allowing some of these models to run on things like RPis. More people are working on this stuff now, meaning that the pace is going faster. Yeah, most of it is incremental, but that doesn't mean it isn't difficult to keep up with where the best results are being found.
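(For concreteness, a minimal sketch of the quantized-loading side of that, assuming Hugging Face transformers with bitsandbytes 8-bit weights; the model id and prompt are placeholders, real LLaMA weights need separate access, and the Raspberry Pi ports mentioned above typically go through llama.cpp's 4-bit formats rather than this route.)

    # Sketch only: load a causal LM with 8-bit quantized weights to cut memory use.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "huggyllama/llama-7b"  # placeholder id; substitute weights you have access to
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        load_in_8bit=True,   # bitsandbytes int8 quantization (needs bitsandbytes + accelerate)
        device_map="auto",   # let accelerate place layers on the available devices
    )

    prompt = "The pace of open LLM releases in 2023 has been"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(output[0], skip_special_tokens=True))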
> The output is still (very pretty) garbage >10% of the time, and that's the danger zone...the Level 4 of autonomous thinking.
What use cases do you have for it? Isn't it the other way? I'd argue that the output is garbage <10% of the time, and it's those times you have to be a little careful for.
What else explains the massive adoption if it isn't providing a decent amount of utility?
Overall I agree with the sentiment though. It's like people underestimating how long it takes to complete the last 10% of a software project.
Honestly, I don't know yet. Maybe it's a better version of those useless support chatbots that were ubiquitous during the last AI hype cycle?
Beyond that, you start to get into questions like "how much better is this technology than competing technology X, at benchmarks I care about?". But those questions don't capture the hive mind in quite the same way.
> I'd argue that the output is garbage <10% of the time, and it's those times you have to be a little careful for.
I don't know where the "bullshit threshold" is for a given task, but I've already seen articles claiming that programmers will be obsolete, citing GPT-generated code that was incorrect. So, you know...YMMV.
> What else explains the massive adoption if it isn't providing a decent amount of utility?
Hmmmm...I forgot the word. Begins with "H"...hyphen? Hippy? Hippo? Hyperbola?
> Having worked in the space on-and-off since my PhD, the pace is not "overwhelming". You just started paying attention to it.
Really? Maybe I too only recently "started paying attention to it", but it's way above my other go-to example of churn, which is the JavaScript ecosystem - and that has the benefit of mostly being fashion and faux-innovation. There's nothing comparable I can think of that's been advancing as fast in recent years as AI/ML.
If you told me that LLMs should be trusted to drive cars or operate any other weaponry, I'd tell you it's never going to happen. But they're not: the jobs they'll be doing are the liberal arts/management jobs that have a tremendous fudge factor, close is more than good enough, and failure has few consequences[*]. If an LLM doesn't give me what I want, I can tweak it and try again; absolutely no sweat. Difficult to do after a fatal car accident.
edit: and the ability to construct intention from input has to make them ideal for interfaces (from an ignorant layman's pov.) Is anybody using generative AI to create interfaces for prosthetic limbs or other peripherals? Search engines offering them in order to interface with the internet seems like the least interesting thing to do with them, although (as I mentioned above) with artistic/creative output it's difficult to define failure.
This is the kind of sober take that is sadly missing from these discussions. Reading all the people LARPing about how ChatGPT can solve everything humans were doing until a few months ago is amusing, but it gets tiresome pretty quickly.
And I say that as someone enthusiastic enough about it to pay for premium access to the thing. It has been an amazing programming assistant, and I use it as a web search on steroids.
But I think the advancements in the medium term will be only refinements over what is already here. It will be quite a while before another leap.
What can be seen as overwhelming is the pace of news hitting the mainstream - news you might be expected to have seen if you're active in the industry.
I'm annoyed by the large number of minor projects that are interesting but belong in specialised news hitting the front pages of large sites, and then people expecting you to have checked them out just because of it.
I don't know how far AIs will or can get, but you should be scared of people and organizations first.
OpenAI/Microsoft/Google/Facebook and other high-visibility players may be putting controls, limits, filters, etc. on their products to avoid problems related to their AIs, but low-visibility players may or may not have those controls, and that goes from high-profile groups (governments, intelligence agencies, corporate, big money, etc.) to hobbyists who have access to some of those developments.
So, for one thing, many AIs won't have seat belts and could be used for things that may be seen as negative. You may not be able to ask ChatGPT the wrong questions, but big players will have full control over their own AIs (or worse, they may think so) and do whatever is most profitable for them.
I won't be surprised if that leads to a different kind of cyberwar, something like Stuxnet but targeting AI installations, laws banning research into and use of unauthorized AIs, export controls like with encryption, and so on, not just targeting countries but everyone except the approved partners, known or not. And that's just extrapolating from things that already happened in the old version of reality; this will probably open a new landscape of things to be worried about (at least while we still have the old mentality; later we will accept them and stop naming them as they become the new normal, like what happened with the Snowden revelations).
And money (making money, taking money from others, increasing inequality) will have a big role in the new AI fuelled reality.
I was thinking more in the line of hacking the systems they run on and deleting/corrupting everything, even to the point of reaching the backups.
But rigging their base information, or injecting "knowledge" that subtly messes things up (which their partners are instructed to ignore), is another possible kind of attack.
>In every single Board Meeting in 2023, every Fortune 500 CEO is being asked about what their AI strategy will be, and I’m not sure most of them will have good answers. That’s a big deal
Years ago, I remember similar things regarding "Big Data" strategy. A very easy reach is building your AI strategy utilizing that, even if your big data stuff has stagnated (that trend didn't die, it just stopped being marketed with the same enthusiasm).
>Global regulators took more than a decade to react to the emergence of social media companies, and even then it felt like regulators had dated understandings of how those companies operated. What happens this time?
Same as social media, same as crypto. Many institutions move much slower than tech.
There is likely to be quick regulation related to war usage though.
Thank you for reading! I appreciate all the feedback in this thread (this is OP).
I think the Big Data comparison is the right one, though it feels like AI is moving even faster (certainly on the product innovation side of things). Notably, though, there was a TON of money spent building out the Modern Data Stack, as players like Snowflake emerged in that era. So if AI can mirror that success, but faster, I think that's pretty amazing.
Agreed on the regulation point, though it again feels to me like AI is evolving faster than prior tech breakthroughs (feel free to disagree). If for prior cycles the regulation was already slow and lackluster, I do worry about how it keeps up with faster and faster innovation.
With all the hype, if this doesn’t pan out, then the technologists will lose a lot of credibility like they did with the predictions around self driving cars and 3D printing. You can only make so many claims of imminent life-changing tech before people just call your bluff, depending on society’s tolerance.
Unlike all the Waymo cars and 3D printers, GPT-4 is already out there. Anyone can sign up and use it. As long as OpenAI can scale it, even if no further improvements happen, it (or a variant of it) will soon be as mainstream in companies as the MS Office suite of products.
The Microsoft Office Copilot demo blew my socks off, and I really don't say things like that lightly. They have use cases figured out and implemented already.
In my opinion, there's nothing to keep up with. The future of AI is calling an API. Learning the math or theory is worthless. Better to just understand and leverage the capabilities.
This is the bargaining phase of denial.
I've observed this among artists and in other AI communities, where they comfort themselves with "But every job will be automated" and "UBI is coming~". It's just a way to avoid making the hard transition.
People who don't understand the basic 'theory/math', won't even understand things like parameter count, or the tradeoff between intelligence and response-time/running costs, a very important consideration for any real project.
From everything we've seen in AI thus far, it has become more complicated, not less. Prompt engineering has not gotten easier; with the explosion of use cases, it takes real skill and imagination to understand how to optimize prompts.
Looking at Stable Diffusion, new techniques like ControlNet are incredibly powerful but beyond the average person to use (there's a sketch of the workflow below). LoRAs make fine-tuning accessible, but also expected of any professional in the future.
Every white-collar company beyond a hundred people will, in the future, probably want its own fine-tuned models/plugins/model grounding. Hiring people to deliver even 10% extra accuracy is worth it because AIs are so useful. People need to understand how they work under the hood.
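(As a concrete example of the ControlNet point above, here's a hedged sketch using the diffusers library; the model ids are the commonly published ones and the conditioning image path is a placeholder, so treat it as the shape of the workflow rather than a recipe.)

    # Sketch: guide Stable Diffusion with a ControlNet conditioned on a canny edge map.
    import torch
    from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
    from diffusers.utils import load_image

    controlnet = ControlNetModel.from_pretrained(
        "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16
    )
    pipe = StableDiffusionControlNetPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
    ).to("cuda")

    edges = load_image("canny_edges.png")  # placeholder: a precomputed edge map
    # The edge map pins down the composition; the prompt only has to describe content/style.
    image = pipe("a watercolor cottage by a lake", image=edges, num_inference_steps=30).images[0]
    image.save("out.png")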
Maybe. I work in an NLP research division of big tech. I just watched ChatGPT obsolete five years of applied NLP systems. We took open problems we were working on and saw that ChatGPT solved them out of the box.
Well, previously you were a 'researcher' working on 'open problems'. Now the problem has been solved, time to become an 'engineer' working on implementing/optimising the solutions.
The usage of AI will spike 100x going forward, from a mere curiosity inside the company which only affects management decisions (say, sentiment analysis) to extremely intensive direct interactions with customers. This will drastically increase the need for reliability and tractability of the results. NLP researchers simply have to accept that 50% of what they did is useless and reorient around the other 50%; it's the risk of researching in an open field.
Curious to know what product you worked on and how is the team dealing with the realization of ChatGPT and all. Have you thought of what to take up next?
> People who don't understand the basic 'theory/math', won't even understand things like parameter count, or the tradeoff between intelligence and response-time/running costs, a very important consideration for any real project.
I am not sure anyone needs to know the parameter count if you are using GPT4 or similar powerful LLM. Most people also would never need to know "the tradeoff between intelligence and response-time/running costs".
> People need to understand how they work under the hood.
Most people don't understand how computers work, and yet derive immense value from them (non-engineers, but even most programmers I have met have mostly no clue how the computer actually works).
With enough time, it will be democratized. Everything eventually does. It's the reality of living in the techno-information age. With enough time, enough of the information gets accumulated and used by enough of the masses.
Used to be that only large established cabinetmaking shops could produce quality cabinets at scale; now any joe schmoe can rig up a CNC machine, buy a $600 DeWalt planer and a few other budget tools, and do it in their garage.
It used to be that only Hollywood studios could produce movies, scenery, and editing at professional levels; now film nerds with programming chops are creating Mandalorian-like experiences with Unreal Engine.
It used to be only large recording studios could produce the latest beats and set trends in various music genres. Now nerds like Zedd (well not anymore but he started out that way) do it on their personal computers and do it better than most producers.
the way I'm approaching it: I don't need to walk the walk, but I should be able to talk the talk (i.e., understand on some level what is happening even if I couldn't recreate it)
Another comment talked about the need for legislative controls, but mostly to account for the risk of societal breakdown.
I have the same comment but a different domain:
> After that, focus should then shift to dealing with the obsolete worker population in the maximally economical way.
Tax companies at 2x the single-taxpayer withholding amount lost per employee displaced.
So if an employee normally makes 100,000 per year and the average withholding is ~35,000, if 10 jobs are displaced, the tax charge is 700,000. The company still saves 300k, and the AI displacing those ten heads probably costs 10k per license anyway, so there's still room for AI to make economic sense while backstopping the sociological consequences.
Put that tax revenue directly towards social services or minimum monthly reimbursement for everyone (UBI, just with a different name)
1. Why doesn't every company try to hire contractors then? Making firing employees expensive beyond reason always seems to significantly increase the unemployment rate (e.g. France) or drastically increase informal employment (Japan).
2. Why isn't this money paid to the fired workers directly? They are the ones who suffer the most, so why isn't the compensation directly routed to them, but instead through a grubby government? This can be simply done by doubling the severance pay requirements, no need for complicated determination of "jobs lost because we used robots"
3. If AI becomes dominant, there will be new roles such as prompt engineers, that could pay surprisingly well. However, making hiring/firing more risky, will pretty much eliminate business appetite for experimenting with new roles.
People arm-chairing policy requirements need to consider that bureaucracy, just like a programming specification, is not free; best to keep it as simple as possible.
Also, in Europe it's already very difficult to fire full-time employees, so we'll see how their economy responds to the AI shock. Judging by the near-total absence of continental European AI companies (it seems the only successful one is DeepL), it probably won't end well for them.
> 3. If AI becomes dominant, there will be new roles such as prompt engineers, that could pay surprisingly well.
A lot seems to be relying on the idea that this role will pay well or will somehow reach an equivalent level of complexity as modern knowledge work, but that just seems to be speculation at this point. As the complexity of a tool like ChatGPT increases, the complexity (skill) required for prompt engineering might diminish to the point where customers are just interacting with the tool instead of a prompt engineer who can "translate the business logic". Or it might go the other way. No one knows really at this point.
> 1. Why doesn't every company try to hire contractors then? Making firing employees expensive beyond reason always seems to significantly increase the unemployment rate (e.g. France) or drastically increase informal employment (Japan).
This doesn't make firing expensive beyond reason, it makes it only modestly cheaper as opposed to absurdly cheap. Napkin maths:
* Per annum 200,000 spent per employee (100,000 plus benefits, perks, infrastructure, etc.)
* Per annum 25,000 spent per AI agent (10,000 per license plus infrastructure)
* Without tax, 175,000 savings.
* With tax of 70,000 per displaced employee (approximate 2x federal withholding of a Single employee in the US), 105,000 savings to use AI.
This isn't rigorous math, but it should help get the point across.
---
Now, if you just make it a salary conversation:
* 100,000 per employee
* 10,000 per license
* 90,000 savings based on salary v. license alone
* 20,000 savings after that tax.
Either way, AI comes ahead. This just makes sure some of that savings goes into the common welfare rather than upstream to investor pockets.
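(If it helps, the same napkin math as a few lines of Python; the figures are just the ones assumed above, nothing new.)

    # Net savings per displaced employee under the proposed 2x-withholding tax.
    def net_savings(employee_cost, ai_cost, withholding, tax_multiple=2):
        tax = tax_multiple * withholding
        return employee_cost - ai_cost - tax

    # Fully loaded version: 200k per employee, 25k per AI agent, ~35k withholding.
    print(net_savings(200_000, 25_000, 35_000))  # 105000
    # Salary-only version: 100k salary vs. 10k license.
    print(net_savings(100_000, 10_000, 35_000))  # 20000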
I'm not convinced the pollution of AI's training sets from a feedback loop of self or other AI output is a solvable problem. For all we know, the present is capable of the best AI output we'll ever get and the quality will steadily and irreversibly degenerate.
When something is going to make billions and billions of dollars, all problems become solvable in the absence of a theoretical constraint. They could always train only on pre-cutoff-date data, with bigger models or better algorithms. I don't think a decline in quality is going to happen. Worst case, we will just see smaller improvements.
When something is worth billions, someone will just strip-mine it and not worry about what it looks like in the future. Google destroyed the internet and turned it into a shitty SEO farm, rendering it a barren wasteland, while making tons of money. Twitter and Facebook destroyed human attention and turned us into roving zombies staring at screens. If OpenAI can, they'll happily drown us in hollow chatbot output while they extract what value they can.
It’s a complex issue. The size and purity of the training data is a big factor in the quality, and may always be. As popularity explodes, preventing pollution of large global data sets like we have now may be impossible. The problem is theoretical in the sense that separating organic and AI data is already difficult and will necessarily become harder or impossible.
Thanks Seth! I appreciate your response here. Fair point that the "no business is safe" comment might be a little far reaching.
My point more broadly is that no business out there can just ignore AI entirely: it has to be a thoughtful part of your strategy.
I 100% agree with your point on owning the data. I wrote a similar post recently on the importance of data moats, and why I think they're the most important part of defensibility in the age of AI, here:
I think the main thing we need to do at this point is political: prepare for the massive social and economic disruption. The first order of business should be the implementation (in jurisdictions that don't have it) of key escrow and restrictive gun control, to head off insurrectionist threats. After that, focus should then shift to dealing with the obsolete worker population in the maximally economical way.
> After that, focus should then shift to dealing with the obsolete worker population in the maximally economical way.
Tax companies at 2x the single-taxpayer withholding amount lost per employee displaced.
So if an employee normally makes 100,000 per year and the average withholding is ~35,000, if 10 jobs are displaced, the tax charge is 700,000. The company still saves 300k, and the AI displacing those ten heads probably costs 10k per license anyway, so there's still room for AI to make economic sense while backstopping the sociological consequences.
Put that tax revenue directly towards social services or minimum monthly reimbursement for everyone (UBI, just with a different name)
> Tax companies at 2x the single-taxpayer withholding amount lost per employee displaced....
> Put that tax revenue directly towards social services or minimum monthly reimbursement for everyone (UBI, just with a different name)
Nah. Politically, that's completely unrealistic. It also immorally directs too many resources away from the productive parts of society.
We have to balance our pursuit of technology with the need for economic productivity and efficiency.
A more realistic solution is state-sanctioned homeless camps in convenient, out-of-the-way places. Corporate taxes would probably have to increase a bit to pay for security, pacifying drugs, and food; but no more than what's necessary for containment. The camps will of course need to be sex segregated, to avoid wasting resources in perpetuity.
> The AI effect occurs when onlookers discount the behavior of an artificial intelligence program by arguing that it is not real intelligence.
> Author Pamela McCorduck writes: "It's part of the history of the field of artificial intelligence that every time somebody figured out how to make a computer do something—play good checkers, solve simple but relatively informal problems—there was a chorus of critics to say, 'that's not thinking'." Researcher Rodney Brooks complains: "Every time we figure out a piece of it, it stops being magical; we say, 'Oh, that's just a computation.'"
But don't forget the near antonym ELIZA effect, helpfully linked from the same Wikipedia page. People love to project cognitive explanations onto any complex system behaviors that they observe.
I only briefly toured the AI field over 25 years ago after the prior AI winter was old news. I remember a couple epigrams heard from specialists at the time:
"AI is just advanced algorithms", which I think is a (slightly bitter, mostly pragmatic) reflection on the AI effect by people who had had to adjust their outlooks after that AI winter.
"Play syntactic games, win syntactic prizes", which I think was meant to remind one of the ever-present ELIZA effect.
My biggest worry about this current AI hype cycle is how it will be marketed with the same kind of obfuscation as the Web 2.0-and-later user-driven content distribution systems. I'm afraid most consumers do not understand how much smoke and mirrors is involved in making products seem smarter and more authoritative than they actually are, or how users themselves subconsciously cover real gaps in the correctness or completeness of the products and put more faith in them than is really warranted.
What is happening here is the opposite. We have those infinitely hyped chatbots (which, granted, can currently chat aimlessly better than me, and that is impressive) being hyped into silver bullets that can kill any monster. But all they can do is to chat, by construction.
At the same time, other AIs are constantly progressing at other, useful things, completely ignored by nearly everybody.
> But all they can do is to chat, by construction.
Perhaps, but with the ability to traverse the web and interface with applications, there might actually be a qualitative difference from chatbots of yore.
> At the same time, other AIs are constantly progressing at other, useful things, completely ignored by nearly everybody.
Well, yeah, the chatbots of today are qualitatively different from the chatbots of yore. That doesn't turn them into secretaries, no matter how much they can mislead you to think they are.
> Interesting, such as what?
Factory robots, terrain traveling, object recognition, speech recognition, voice synthesis... On the symbolic side, have you looked at modern compilers and linters?
Language models should have a huge impact on search. And yes, search is a very impactful thing. But that too requires something a bit different from a chatbot. Not a complete new technology, but different applications.
The motives are different this time. Those in the know are deliberately downplaying and misdirecting to stave off panic and political meddling. So far it seems to be working. The people I know who aren't following closely don't seem to realize how we may be on the brink of something very big. And these are very smart and technical people.
I do wonder if the intelligence agencies are already deeply involved. If OpenAI is working with them training no-guardrails AIs.
I also wonder if the CCP/PLA is pushing Baidu's AI efforts, or if Baidu is working on its own and trying to avoid official attention.
Yup, I have used tons of chatbots over the years and they all look like worthless toys compared to GPT-4. Saying that they are similar is like saying mobile phones are similar to old diaries.
50% of software engineers unemployed in 2 years, 5 years, 10 years?
Or something like 50% of the US/world population using a ChatGPT-style assistant to perform a complicated mental task (aka not "Alexa, set a 2 minute timer.") once a day/week in 2 years, 5, 10?
Don't get me wrong, ChatGPT's emergent intelligence is impressive and I'm playing with Alpaca 13B and other models locally, but I'm not sure it's going to be as transformative, or arrive as quickly, as many here seem to think. Humans and society are inherently resistant to and slow to change.
The World Bank currently pegs the working-age population (15-64) at 5.12B, so that'd mean ChatGPT would put nearly 6% of the working-age population out of a job (of course depending on your definition of "displace").
Create a detailed counterpoint to the mentioned post. Refer to all the instances where similar sentiments towards technological advances have been false, especially in the context of AI. Cite your sources using Chicago style. Everything as a LaTeX document, where every sentence is first written in English and then in German translation. Mark the German translation in green. The document shall be in A4 format. Cite the comment at the beginning. Refer to the original poster not by his Hacker News name but by "Unidentified Source 1".
That's exactly what it is, despite the "GPT bros" playing it up. It's literally a scaled-up ELIZA that has billions of substitutions instead of tens. People just get dazzled by it and read too far into what it's doing.
My impression too. They keep saying "just wait a year or two and...". And they might as well be right, but for now it's just a slightly better way to find information online.
> this moment feels a lot like the "Level 5 self-driving will make driving obsolete by 2020!" panic we had circa 2015.
Newsflash: it's 2023. We still drive.