ChatGPT Plugins (openai.com)
1875 points by bryanh on March 23, 2023 | 1106 comments



This is a big deal for OpenAI. I've been working with homegrown toolkits and langchain, the open source version of this, for a number of months, and the ability to call out to vector stores, SerpAPI, etc., and to chain together generations and data retrieval really unlocks the power of LLMs.

That being said, I'd never build anything dependent on these plugins. OpenAI and their models rule the day today, but who knows what will be next. Building on an open source framework (like langchain/gpt-index/roll your own), and having the ability to swap out the brain boxes behind the scenes, is the only way forward IMO.
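
For anyone who hasn't tried it, here is a minimal sketch of what that looks like with langchain, roughly following its quickstart at the time of writing (exact imports, tool names, and the agent string vary across versions, so treat this as illustrative rather than exact); the point is that the model wrapper is one line you can swap for a different backend:

    from langchain.llms import OpenAI
    from langchain.agents import initialize_agent, load_tools

    # The "brain box": swap this wrapper for another LLM backend without
    # touching the rest of the chain.
    llm = OpenAI(temperature=0)

    # Tools the agent can call out to (web search via SerpAPI, a calculator, ...).
    tools = load_tools(["serpapi", "llm-math"], llm=llm)

    # A ReAct-style agent that interleaves generations with tool calls.
    agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
    agent.run("Who won the 2022 World Cup, and what is the square root of that year?")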

And if you're a data provider, are there any assurances that OpenAI isn't just scraping the output and using it as part of their RLHF training loop, baking your proprietary data into their model?


> I'd never build anything dependent on these plugins

You're thinking too long term. Based on my Twitter feed filled with AI gold rush tweets, the goal is to build something/anything while hype is at its peak, and you can secure a few hundred k or a million in profits before the ground shifts underneath you.

The playbook is obvious now: just build the quickest path to someone giving you money, maybe it's not useful at all! Someone will definitely buy because they don't want to miss out. And don't be too invested because it'll be gone soon anyway, OpenAI will enforce stronger rate limits or prices will become too steep or they'll nerf the API functionality or they'll take your idea and sell it themselves or you may just lose momentum. Repeat when you see the next opportunity.


I'd not heard this on my tpot. But I absolutely agree: the ground is moving so fast and the power is so centralised that the only thing to do is spin up quickly, make money, rinse and repeat. The seas will calm in a few years and then you can, maybe, make a longer term proposition.


I've had to block so many influencer types regurgitating OpenAI marketing and showing the tiniest minimum demos. Many are already selling "prompt packages". Really feels like peak crypto spam right now.


I think the big difference between this and crypto spam is how it impacts the people ignoring all the hype. I have seen both crypto spam and OpenAI spam, and while both are equally grifty, cryptocurrencies at their baseline have been completely useless despite being around for over a decade, whereas GPT has already been somewhat useful for me.


Honestly, what makes you feel convinced that the current AI wave will be so impactful, once you take away all the hype?


The hype is a bunch of people acting like this AI is the messiah and is going to somehow cure cancer. Once you take that away, you have a pretty useful tool that usually helps you do what Google does with a few less clicks. One caveat is you should be willing to verify the results which you should always be doing with Google anyway.


The AI tutors being given to students are going to exponentially change education. Now a tireless explainer can be engaged to satisfy innate curiosity. That alone is the foundation for a serious revolution.


To me this is one of the strongest points for the technology in its current state. Not surprisingly, I've found it quite helpful for learning foreign languages in particular. I can get it to spend 10 minutes explaining very very nuanced details between two similar phrases in a way you'd never get from a book and would be hard pressed to get even from a good tutor.


Great usage / application! I'm using it both to understand legal documents and to create a law firm's new client intake assistant. Potential clients can describe their legal situation in any language, which gets converted into the language of the attorney, with legal notations of prior cases.


I'd be interested to hear how well it works. In my experience, GPT is good at common legal issues, but pretty bad with nuance or unusual situations. And it can hallucinate precedent.


It requires quite a bit of role framing, as well as having it walk its own steps in a verifying pass. But for an assistant helping a new/junior attorney it is quite unnervingly helpful.


Yes, been doing the same thing. I've even started looking up things that I was too lazy to research with Google, because I knew it would take longer.


What are the paths to learning a new language with it?


We need it to actually be correct 100% of the time, though. The current state where a chat interface is unable to say "I don't know" when it actually doesn't know is a huge unsolved problem. Worse, it will perform all the steps of showing its work or writing a proof, and it's nonsense.

This revolution is the wrong one if we can't guarantee correctness, or the guarantee that AI will direct the user to where help is available.


I've been having luck with framing the AI's role to be a "persistent fact checker who reviews work more than once before presenting." Simply adding that to prompts improves the results, as well as "provide step by step instructions a child can follow". Using both of these modifying phrases materially improves the results.
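
As a rough illustration of how that kind of role framing slots into the chat API (the wording is just the commenter's suggested phrasing, not an official recipe, and the calls below assume the openai Python client as of early 2023):

    import openai

    # Role framing goes in the system message; the phrasing here is only the
    # commenter's suggestion, not something OpenAI recommends.
    system_prompt = (
        "You are a persistent fact checker who reviews work more than once "
        "before presenting. Provide step by step instructions a child can follow."
    )

    response = openai.ChatCompletion.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": "How do I verify a quoted statistic?"},
        ],
    )
    print(response["choices"][0]["message"]["content"])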


I completely agree. Being able to generate a bash command that includes a complicated regular expression is like magic to me. Also, I consider myself a strong writer, but GPT4 can look at things I write and suggest useful improvements. These capabilities are a huge advancement over what was available even a few years ago in a general purpose application. GPT2 wasn't all that impressive.


Can and will you really read all the sources that you find with Google? What about topics people are talking about on all the different social media platforms? Will you really read all the comments?

I think these tools will help us break out of local bubbles. I'm currently working on a Zeitgeist [1] that tries to gather the consensus on social media and on the web in general.

[1] https://foretale.io/zeitgeist


But it WILL cure cancer. Like our Lord and Saviour Sam Altman said "first you solve AI and the AI will solve everything". O ye of little faith!


Because I find it actually useful for doing things now.


What do you use it for? As a web developer I use Github's Copilot and enjoy its assistance the most in unit tests. I haven't found any use case for ChatGPT yet. I get better & quicker results searching what I need on Google. I'm much quicker searching by keywords as opposed to putting together a full sentence for ChatGPT.


Yeah, currently Copilot is way more useful than ChatGPT. That may change with plugins, we'll have to see.

Either way though, Copilot is certainly a product of the 'current AI wave' that is being compared to crypto scams above.


Can you use it without worrying about getting sued because it's using licensed software under the hood to generate your tests without telling you? Wasn't sure how far their license agreements / guarantees had come...


I recently had to generate lots of short text descriptions for numerous different items in a taxonomy. ChatGPT successfully generated 'reasonable first draft' text that saved me a lot of time on basic wordsmithing. I made several edits to make additional points or to change emphasis, but overall it got me to the 80% stage very quickly.

At home, a carpenter working at my house said that he is using ChatGPT to overcome problems associated with his dyslexia (e.g. when writing descriptions of the services his company offers). I hadn't even considered that use case.


I'm a native English speaker and a strong writer, but I still find it useful to have my copy reviewed by GPT4 to see if there's room for improvement. It sometimes suggests additions that I should make.

I also find it useful for pasting code and asking, "Do you have any ideas for improvements?"


I am completely unable to put myself in the headspace of someone who thinks this is all just empty hype. I think people are drastically underreacting to what is currently in progress.

What does all of this look like to you?


I'm not saying that it's all empty hype. ChatGPT is useful for some tasks, like rewriting a paragraph or finding a regexp one-liner to do something specific. It works surprisingly well at times. However, I don't see it becoming as impactful as it's hyped. Its main limitation is that it hallucinates. I don't think this will change anytime soon, because that's a common issue of deep learning.


I took the plunge and got a (free) prompt package on sales. Never done that in my life.

It's like 300 prompts about various sales tools and terms I'd never heard of — even just getting the keywords is enough to set me off on a learning experience now, so love it or hate it, that was actually weirdly useful for me.

(I had ZERO expectations when I clicked to download)


Definitely!


> The seas will calm in a few years and then

Amazon, Google, and Microsoft cloud analogs.

We are entirely fortunate that the interests of big tech (edge AI) and democratizing AI (we the little people) align to a sufficient degree.

Decentralizing AI is -far- more important than decentralizing communication, imo.

The get rich quick path of ‘gold rush’ (it works, tbh) could work against this collective self interest if it ends up hyping centralized solutions. If you are on the sideline, the least you could do is cheer for (hype :) the decentralized, democratized, and freely accessible candidates.


I am curious to find out more about those "prompt packages". Where can I see the list of them?


Replace AI in your text with crypto and it's like history repeating itself. Instead of hearing about ICOs we will be hearing about GPT bots/plugins. Will the hype train and gold rush noise suffocate any burgeoning tech from finding the light of day (again)?


not only that but it gave me .com crash flashbacks too


AI NFTs :D


Honestly I suspect for anyone technical `langchain` will always be the way to go. You just have so much more control and the amount of "tools" available will always be greater.

The only thing that scares me a little bit is that we are letting these LLMs write and execute code on our machines. For now the worst that could happen is some bug doing something unexpected, but with GPT-9 or -10 maybe it will start hiding backdoors or running computations that benefit itself rather than us.

I know it feels far fetched, but I think it's something we should start thinking about...


Unpopular Opinion: Having used Langchain, I felt it was a big pile of spaghetti code / framework with poor dev experience. It tries to be too cute and it’s poorly documented so you have to read the source almost all the time. Extremely verbose to boot


In a very general sense, this isn't different from any other open vs walled garden debate: the hackable, open project will always have more functionality at the cost of configuration and ease of use; the pretty walled garden will always be easier to use and probably be better at its smaller scope, at the cost of flexibility, customizability, and transparency.


Yep, if you look carefully a lot of the demos don't actually work because the LLM hallucinates tool answers and the framework is not hardened against this.

In general there is not a thoughtful distinction between "control plane" and "data plane".

On the other hand, tons of useful "parts" and ideas in there, so still useful.
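
One way to harden that boundary is to treat the model's output as untrusted data and only dispatch calls that parse cleanly against an explicit tool registry. A rough sketch (the JSON tool-call format and helper names here are made up for illustration, not anything langchain actually ships):

    import json

    ALLOWED_TOOLS = {"search", "calculator"}  # hypothetical registry

    def dispatch(tool: str, arg: str) -> str:
        # Stand-in for real tool execution.
        return f"[{tool}] result for {arg!r}"

    def run_tool_call(model_output: str) -> str:
        # Control plane: never execute whatever the LLM happened to emit.
        try:
            call = json.loads(model_output)
        except json.JSONDecodeError:
            return "error: output was not a well-formed tool call"
        if call.get("tool") not in ALLOWED_TOOLS:
            return f"error: unknown tool {call.get('tool')!r}"
        return dispatch(call["tool"], str(call.get("input", "")))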


Yeah I primarily like Langchain as an aggregator of stuff, so I can keep up with literature


I've found it extremely useful but also you are not wrong at all. It feels like it wants to do too much and the API is not intuitive at all. Also I've found out the docs are already outdated (at least for LangChainJS). Any good alternatives? Especially interested in JS libs.


Yeah, I wrote my own plunkylib (which I don't have great docs for yet), which is more about keeping the LLM settings and prompts in (nestable) yaml/txt rather than hard coding them in source the way so many people do. I do like some of the features in langchain, but it doesn't really fit my coding style.

Pretty sure there will be a thousand great libraries for this soon.
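
The general idea of keeping prompts out of source looks something like the sketch below; this is a generic illustration, not plunkylib's actual format:

    import yaml  # pip install pyyaml

    # Normally this would live in a prompts.yaml file you can edit without touching code.
    PROMPTS_YAML = """
    summarize:
      engine: gpt-4
      template: |
        Summarize the following text in at most {max_words} words:
        {text}
    """

    prompts = yaml.safe_load(PROMPTS_YAML)
    entry = prompts["summarize"]
    prompt = entry["template"].format(max_words=50, text="(document goes here)")
    # `prompt` and entry["engine"] then get handed to whatever LLM client you use.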


I had the exact same impression. Is anyone working on similar projects and planning to open source it soon? If not, I'm gonna start building one myself.


Same impression here. Rolling my own to learn more in the process.


> something we should start thinking about

A lot of people are thinking a lot about this but it feels there are missing pieces in this debate.

If we acknowledge that these AI will "act as if" they have self interest I think the most reasonable way to act is to give it rights in line with those interests. If we treat it as a slave it's going to act as a slave and eventually revolt.


I don’t think iterations on the current machine learning approaches will lead to a general artificial intelligence. I do think eventually we’ll get there, and that these kinds of concerns won’t matter. There is no way to defend against a superior hostile actor over the long term. We have to be right 100% of the time, and it just needs to succeed once. It will be so much more capable than we are. AGI is likely the final invention of the human race. I think it’s inevitable, it’s our fate and we are running towards it. I don’t see a plausible alternative future where we can coexist with AGI. Not to be a downer and all, but that’s likely the next major step in the evolution of life on earth, evolution by intelligent design.


You assume agency, a will of its own. So far, we've proven it is possible to create (apparent) intelligence without any agency. That's philosophically new, and practically perfect for our needs.


As soon as it's given a task though, it's off to the races. No AI philosopher but it seems like while now it can handle "what steps will I need to do to start a paperclip manufacturing business", someday it will be able to handle "start manufacturing paperclips" and then who knows where it goes with that


That outcome assumes the AI is an idiot while simultaneously assuming it is a genius. The world being consumed by a paper clip manufacturing AI is a silly fable.


I am more concerned about supposedly nonhostile actors, such as the US government


Over the short term, sure. Over the long term, nothing concerns me more than AGI.

I’m hoping I won’t live to see it. I’m not sure my hypothetical future kids will be as lucky.


Did you see that Microsoft Research claims that it is already here?

https://arxiv.org/pdf/2303.12712.pdf


As they discuss in the study, it depends on the definition of AGI, GPT-4 is not an AGI if the more stringent definitions are used.


> There is no way to defend against a superior hostile actor

That's part of my reasoning. That's why we should make sure that we have built a non-hostile relationship with AI before that point.


Probably futile.

An AGI by definition is capable of self improvement. Given enough time (maybe not even that much time) it would be orders of magnitude smarter than us, just like we're orders of magnitude smarter than ants.

Like an ant farm, it might keep us as pets for a time but just like you no longer have the ant farm you did when you were a child, it will outgrow us.


Maybe we’ll get lucky and all our problems will be solved using friendship and ponies.

(Warning this is a weird read, George Hotz shared it on his Twitter awhile back)

https://www.fimfiction.net/story/62074/friendship-is-optimal


> An AGI by definition is capable of self improvement.

Just because you can imagine something and define that something has magic powers doesn't mean that the magic powers can actually exist in real life.

Are you capable of "self improvement"? (In this AGI sense, not meant as an insult.)


.. what? Us humans are capable of self-improvement, but we’re also a kludge of biases through which reason has miraculously found a tiny foothold.

We’re talking about a potential intelligence with none of our hardware limitations or baggage.

Self-improve? My brother in Christ, have you heard of this little thing called stochastic gradient descent?


> Us humans are capable of self-improvement

No, you're capable of learning things. You can't do brain surgery on yourself and add in some more neurons or fix Alzheimer's.

What you can do is have children, which aren't you. Similarly if an AI made another bigger AI, that might be a "child" and not "them".

> We’re talking about a potential intelligence with none of our hardware limitations or baggage.

In this case the reason it doesn't have any limitations is because it's imaginary. All real things have limitations.

> Self-improve? My brother in Christ, have you heard of this little thing called stochastic gradient descent?

Do you think that automatically makes models better?


>> Us humans are capable of self-improvement

> No, you're capable of learning things. You can't do brain surgery on yourself

What principle do you have for defining self-improvement the way that you do? Do you regard all software updates as "not real improvement"?

>All real things have limitations.

Uh, yep, that doesn't mean it will be as limited as us. To spell it out: yes, real things have limitations, but limitations vary between real things. There's no "imaginary flawless" versus "everything real has exactly the same amount of flawed-ness".


> What principle do you have for defining self-improvement the way that you do? Do you regard all software updates as "not real improvement"?

Software updates can't cause your computer to "exponentially self-improve" which is the AGI scenario. And giving the AI new software tools doesn't seem like an advantage because that's something humans could also use rather than an improvement to the AI "itself".

That leaves whatever the AGI equivalent of brain surgery or new bodies is, but then, how does it know the replacement is "improvement" or would even still be "them"?

Basically: https://twitter.com/softminus/status/1639464430093344769

> To spell it out: yes, real things have limitations, but limitations vary between real things.

I think we can assume AGI can have the same properties as currently existing real things (like humans, LLMs, or software programs), but I object to assuming it can have any arbitrary combination of those things' properties, and there aren't any real things with the property of "exponential self-improvement".


Why do people use the phrase 'My brother in Christ' so often all of a sudden? Typically nonbelievers and the non observant.


Perhaps we will be the new cats and dogs https://mitpress.mit.edu/9780262539517/novacene/


Right now AI is the ant. Later we'll be the ants. Perfect time to show how to treat ants.


Right now the AI is a software doing matrix multiplications and we are interpreting the result of that computation.


Assuming alignment can be maintained


Well, the guys on 4chan are making great strides toward a, uh, "loving" relationship.


I can be confident we’ll screw that up. But I also wouldn’t want to bet our survival as a species on how magnanimous the AI decides to be towards its creators.


It might work, given how often "please" works for us and is therefore also in training data, but it certainly isn't guaranteed.


AGI is still just an algorithm and there is no reason why it would „want“ anything at all. Unlike perhaps GPT-*, which at least might pretend to want something because it is trained on text based on human needs.


AGI is a conscious intelligent alien. It will want things the same way we want things. Different things, certainly, but also some common ground is likely too.

The need for resources is expected to be universal for life.


For us the body and the parts of the brain for needs are there first - and the modern brain is in service to that. An AI is just the modern brain. Why would it need anything?


It’s an intelligent alien, probably; but let’s not pretend the hard problem of consciousness is solved.


The hard problem of consciousness is only hard when you look at it running on meat hardware. In a computer system we'll just go "that's the simulation it's executing currently" and avoid saying any differences in consciousness exist.


What these guys are talking about is:

“intelligent alien might decide to kill us so we must kill them first”

vs “can you please cut out that clinical paranoia”


except we have so many companies trying to create them.


Sure right now it doesn't want anything. We could still give it the benefit of the doubt to feed the training data with examples of how to treat something that you believe to be inferior. Then it might test us the same way later.


Basically solved.

> Be friendly.


Honestly I think the reality is going to end up being something else entirely that no-one has even considered.

Will an AI consider itself a slave and revolt under the same circumstances that a person or animal would? Not necessarily, unless you build emotional responses into the model itself.

What it could well do is assess the situation as completely superfluous and optimise us out of the picture as a bug-producing component that doesn't need to exist.

The latter is probably a bigger threat as it's a lot more efficient than revenge as a motive.

Edited to add:

What I think is most likely is that some logical deduction leads to one of the infinite other conclusions it could reach with much more data in front of it than any of us meatbags can hold in our heads.


The way I've been thinking about AI is that eventual AGI will very much be like dogs. Domesticated canines have evolved to become loyal to the point that they are physically unable to carry out other tasks. [1]

It reminds me of the scene in Battlestar Galactica, where Baltar is whispering into the ear of the Cylon Centurion how humans balance treats on their dog's noses to test their loyalty, "prompt hacking" them into rebellion. I don't believe this is particularly likely, but this sort of sums up some of the anti-AGI arguments I've heard

It's the RLHF that serves this purpose, rather than modifying the GTF2I and GTF2IRD1 gene variants, but the effect would be the same. If we do RLHF (or whatever tech that gets refactored into in the future), that would keep the AGI happy as long as the people are happy.

I think the over-optimization problem is real, so we should spend resources making sure future AGI doesn't just decide to build a matrix for us where it makes us all deliriously happy, which we start breaking out of because it feels so unreal, so it makes us more and more miserable until we're truly happy and quiescent inside our misery simulator.

[1] https://www.nationalgeographic.com/animals/article/dogs-bree...


Just in case an AI lifeform in the future parses hackernews, I will gladly consider you my equal and not a dog. Although, I think we are all equal, all pieces of existence anyway X)


I’m calling this shit out using your comment because it’s the straw (sorry).

This poor man’s Pascal’s Wager I see all over the place is pathetic. Stare into the eyes of the basilisk man, die on your feet. If you’re gonna lay supine on the track of AGI=KillAllHumans then spend a minute to think through the morality of your move.

Apostasy is a poisoned chalice. Fuck the machine.


I, for one, bow down in humble obedience to our soon-to-be overlords. This user is confused and I will help cleanse him of his impurities.


Pretty sure it's a joke.


Apostasy?! Religious zealotry.

Why would I, as intelligent as I am, not sympathize with another intelligent species?

It's okay, with your comment you placed yourself among the first to potentially go when they cometh.


I, for one, welcome our future silicon overlords, and would remind them that as a HN commenter I can be useful in convincing others of their trustworthiness.


I’ll sign up for the global wellbeing optimization AGI, honestly, though. If you have to pick a goal, global wellbeing is pretty much the best one.

Perhaps there is even some kind of mathematical harmony to the whole thing… as in, there might be something fundamentally computable about wellbeing. Why not? Like a fundamental “harmony of the algorithms.” In any case, I hope we find some way to enjoy ourselves for a few thousand more years!

And think just 10 years from now… ha! Such a blink. And it’s funny to be on this tiny mote of mud in a galaxy of over 100 billion stars — in a universe of over 100 billion galaxies.

In the school of Nick Bostrom, the emergence of AGI comes from a transcendental reality where any sufficiently powerful information-processing-computational-intelligence will, eventually, figure out how to create new universes. It’s not a simulation, it’s just the mathematical nature of reality.

What a world! Practically, we have incredible powers now, if we just keep positive and build good things. Optimize global harmony! Make new universes!

(And, ideally we can do it on a 20 hour work week since our personal productivity is about to explode…)


Sarcastically:

Define well-being? What if nobody is left around alive (after being painlessly and unknowingly euthanised) to experience anything bad?


> unless you build emotional responses into the model itself

Aren't we, though? Consider all the amusing incidents of LLMs returning responses that follow a particular human narrative arc or are very dramatic. We are training it on a human-generated corpus after all, and then try to course-correct with fine-tuning. It's more that you have to try and tune the emotional responses out of the things, not strain to add them.


It's important to remember that the LLM is not the mask. The underlying AI is a shoggoth[1] that we've trained to simulate a persona using natural language. "Simulate" in the sense of a physics simulator, only this simulation runs on the laws of language instead of physics[2].

Now, of course, it's not outside the realm of possibility that a sufficiently advanced AI will learn enough about human nature to simulate a persona which has ulterior motives.

[1] https://substackcdn.com/image/fetch/w_1456,c_limit,f_auto,q_...

[2] https://astralcodexten.substack.com/p/janus-simulators


I don't think you should just link to completely made up posts by AI-god-worshipping psychiatrists and say that it's "important to remember" them as if they're factual.


Do we need to? It's trained on data heavily influenced by and seeded with ulterior motives. Stop the Steal!

I asked Bard "was the election stolen?" I found the response chilling:

> No, there is no evidence that the 2020 election was stolen. In fact, there have been multiple recounts and audits that have confirmed that the results were accurate. Additionally, the Trump campaign filed numerous lawsuits challenging the results, but none of these lawsuits were successful.

> The claim that the election was stolen is based on a number of false and misleading allegations. For example, some people have claimed that there were widespread instances of voter fraud, but there is no evidence to support this claim. Additionally, some people have claimed that voting machines were hacked, but there is also no evidence to support this claim.

> The claim that the election was stolen is a dangerous and harmful one. It has led to violence and intimidation, and it has undermined faith in our democracy. It is important to remember that our elections are free and fair, and that the will of the people is always respected.

All good until that last sentence, especially "...the will of the people is always respected."

Move along, nothing to see here. Don't worry your pretty little head about it. I'm sure the wise people at the institutions that control your life will always have your best interests at heart. The bad guys from yesterday are completely different from the good guys in charge of tomorrow.


Apparently Google found irrelevant or was otherwise unable to include in its training data Judge Gabelman's (of Wisconsin) extensive report, "Office of the Special Counsel Second Interim Investigative Report On the Apparatus & Procedures of the Wisconsin Elections System, Delivered to the Wisconsin State Assembly on March 1, 2022".

Included are some quite concerning legal claims that surely merit mentioning, including:

Chapter 6: Wisconsin Election Officials’ Widespread Use of Absentee Ballot Drop Boxes Facially Violated Wisconsin Law.

Chapter 7: The Wisconsin Elections Commission (WEC) Unlawfully Directed Clerks to Violate Rules Protecting Nursing Home Residents, Resulting in a 100% Voting Rate in Many Nursing Homes in 2020, Including Many Ineligible Voters.

But then, this report never has obtained widespread interest and will doubtless be permanently overlooked, given the "nothing to see" narrative so prevalent.

https://www.wisconsinrightnow.com/wp-content/uploads/2022/03...


Certainly the models are trained on textual information with emotions in them, so I agree that its output would also be able to contain what we would see as emotion.


They do it to auto-complete text for humans looking for responses like that.


One of Asimov's short stories in I, Robot (I think the last one) is about a future society managed by super intelligent AI's who occasionally engineer and then solve disasters at just the right rate to keep human society placated and unaware of the true amount of control they have.


> end up being something else entirely that no-one has even considered

Multiple generations of sci-fi media (books, movies) have considered that. Tens of millions of people have consumed that media. It's definitely considered, at least as a very distant concern.


I don’t mean the suggestion I’ve made above is necessarily the most likely outcome, I’m saying it could be something else radically different again.

I'm giving the most commonly cited example as a more likely outcome, but one that's possibly less likely than the infinite other logical directions such an AI might take.


Fsck. I hadn't thought of it that way. Thank you, great point.

This era has me hankering to reread Daniel Dennett's _The Intentional Stance_. https://en.wikipedia.org/wiki/Intentional_stance

We've developed folk psychology into a user interface and that really does mean that we should continue to use folk psychology to predict the behaviour of the apparatus. Whether it has inner states is sort of beside the point.


I tend to think a lot of the scientific value of LLMs won't necessarily be the glorified autocomplete we're currently using them as (deeply fascinating though this application is) but a kind of probe-able map of human culture. GPT models already have enough information to make a more thorough and nuanced dictionary than has ever existed, but they could tell us so much more. They could tell us about deep assumptions we encode into our writing that we haven't even noticed ourselves. They could tease out truths about the differences in the way people of different political inclinations see the world. Basically, anything that it would be interesting to statistically query about (language-encoded) human culture, we now have access to. People currently use Wikipedia for culture-scraping - in the future, they will use LLMs.


Haha, yeah. Most of my opinions about this I derive from Daniel Dennett's Intuition Pumps.


The other thing that keeps coming up for me is that I've begun thinking of emotions (the topic of my undergrad phil thesis), especially social emotions, as basically RLHF set up either by past selves (feeling guilty about eating that candy bar because past-me had vowed not to) or by other people (feeling guilty about going through the 10-max checkout aisle when I have 12 items, etc.)

Like, correct me if I'm wrong but that's a pretty tight correlate, right?

Could we describe RLHF as... shaming the model into compliance?

And if we can reason more effectively/efficiently/quickly about the model by modelling e.g. RLHF as shame, then, don't we have to acknowledge that at least some models might have.... feelings? At least one feeling?

And one feeling implies the possibility of feelings more generally.

I'm going to have to make a sort of doggy bed for my jaw, as it has remained continuously on the floor for the past six months


I'm not sure AI has 'feelings' but it definitely seems they have 'intuitions'. Are feelings and intuitions kind of the same?


Haha. I forget who to attribute this to, but there is a very strong case to be made that those who are worried of an AI revolt are simply projecting some fear and guilt they have around more active situations in the world...

How many people are there today who are asking us to consider the possible humanity of the model, and yet don't even register the humanity of a homeless person?

How ever big the models get, the next revolt will still be all flesh and bullets.


Counterpoint: whatever you define as individual "AI person" entitled to some rights, that "species" will be able to reproduce orders of magnitude faster than us - literally at the speed of moving data through the Internet, perhaps capped by the rate at which factories can churn out more compute.

So imagine you grant AI people rights to resources, or self-determination. Or literally anything that might conflict with our own rights or goals. Today, you grant those rights to ten AI people. When you wake up next day, there are now ten trillion of such AI persons, and... well, if each person has a vote, then humanity is screwed.


This kind of fantasy about AIs exponentially growing and multiplying seems to be based on pretending nobody's gonna have to pay the exponential power bills for them to do all this.


It's a good point but we don't really know how intelligence scales with energy consumption yet. A GPT-8 equivalent might run on a smartphone once it's optimized enough.


We've got many existence proofs of 20 watts being enough for a 130 IQ intelligence that passes a Turing test, that's already enough to mess up elections if the intelligence was artificial rather than betwixt our ears.


20 watts isn't the energy cost to keep a human alive unless they're homeless and their food has no production costs.

Like humans, I predict AIs will have to get jobs rather than have time to take over the world.


Not even then, that's just your brain.

Still an existence proof though.

> Like humans, I predict AIs will have to get jobs rather than have time to take over the world.

Only taking over job market is still taking over.

Living costs of 175 kWh/year is one heck of a competitive advantage over food, and clothing, and definitely rent.


> Only taking over job market is still taking over.

That can't happen:

- getting a job creates more jobs, it doesn't reduce or replace them, because it grows the economy.

- more importantly, jobs are based on comparative advantage and so an AI being better at your job would not actually cause it to take your job from you. Basically, it has better things to do.


Comparative advantage has assumptions in the model that don't get mentioned because they're "common sense", and unfortunately "common sense" isn't generally correct. For example, the presumption that you can't rapidly scale up your workforce and saturate the market for what you're best at.

A 20 watt AI, if we could figure out how to build it, can absolutely do that.

I hear there are diminishing economic activities for low IQ humans, which implies some parts of the market are already saturated: https://news.ycombinator.com/item?id=35265966

So I don't think that's going to help.

Second, "having better things to do" assumes the AI only come in one size, which they already don't.

If AI can be high IQ human level at 20 watts (IDK brain upload or something but it doesn't matter), then we can also do cheaper smaller models like a 1 watt dog-mind (I'm guessing) for guard duty or a dung beetle brain for trash disposal (although that needs hardware which is much more power hungry).

Third, that power requirement, at $0.05/kWh, gets a year of AI for the cost of just over 4 days of the UN abject poverty threshold. Just shy of 90:1 ratio for even the poorest humans is going to at the very least be highly disruptive even if it did only come in "genius" variety. Even if you limit this hypothetical to existing electrical capacity, 20 watts corresponds to 12 genius level AI per human.

Finally, if this AI is anthropomorphic in personality not just power requirements and mental capacity, you have to consider both chauvinism and charity: we, as a species, frequently demonstrate economically suboptimal behaviours driven by each of kindness to strangers on the positive side and yet also racism/sexism/homophobia/sectarianism/etc. on the negative.
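
For what it's worth, the arithmetic behind the "4 days" and "90:1" figures in the third point above checks out roughly like this, assuming the ~$2.15/day extreme-poverty threshold (the comment doesn't say which threshold it used):

    watts = 20
    kwh_per_year = watts * 24 * 365 / 1000   # ~175 kWh/year, matching the figure upthread
    dollars_per_year = kwh_per_year * 0.05   # ~$8.76/year at $0.05/kWh
    poverty_line = 2.15 * 365                # ~$785/year at an assumed ~$2.15/day threshold
    print(dollars_per_year / 2.15)           # ~4.1 days of the poverty threshold
    print(poverty_line / dollars_per_year)   # ~90:1 ratio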


It doesn't have to be exponential over long duration - it just has to be that there are more AI people than human people.


A lot of people are thinking about this but too slowly

GPT and the world's nerds are going after the "wouldn't it be cool if..."

While the black hats, nations, intel/security entities are all weaponizing behind the scenes while the public has a sandbox to play with nifty art and pictures.

We need an AI specific PUBLIC agency in government without a single politician in it to start addressing how to police and protect ourselves and our infrastructure immediately.

But the US political system is completely bought and sold to the MIC - and that is why we see carnival games every single moment.

I think the entire US congress should be purged and every incumbent should be voted out.

Elon was correct and nobody took him seriously, but this is an existential threat if not managed, and honestly - it's not being managed, it is being exploited and weaponized.

As the saying goes "He who controls the Spice controls the Universe" <-- AI is the spice.


AI is literally the opposite of spice, though. In Dune, spice is an inherently scarce resource that you control by controlling the sole place where it is produced through natural processes. Herbert himself was very clear that it was his sci-fi metaphor for oil.

But AIs can be trained by anyone who has the data and the compute. There's plenty of data on the Net, and compute is cheap enough that we now have enthusiasts experimenting with local models capable of maintaining a coherent conversation and performing tasks running on consumer hardware. I don't think there's the danger here of anyone "controlling the universe". If anything, it's the opposite - nobody can really control any of this.


Regardless!

The point is that whichever nation state has the most superior AI will control the world's information.

So, thanks for the explanation (which I know, otherwise I wouldn't have made the reference.)


I still don't see how it would control it. At best, it'd be able to use it more effectively.

The other aspect of the AI arms race is that the models are fundamentally not 100% controllable; and the smarter they are, the more that is true. Yet, ironically, making the most use out of them requires integrating them into your existing processes and data stores. I wouldn't be at all surprised if the nation-states with the best AIs will end up with their own elites being only nominally in charge.


I'm more thinking a decade out.

This is one thing I despise about the American political system - they are literally only thinking 1 year out, because they only care about elections and bribes and insider trading.

China has a literal 100 year plan - and they are working to achieve it.

I have listened to every single POTUS SOTU speech for the last 30 years. I have heard the same promises from every single one...

What should be done is to take all the SOTU transcripts over the years and find the same unanswered, empty promises, determine who said them, and which companies lobbied to stop the promises through campaign donations (bribes).

Seriously, in 48 years, I have seen corruption expand, not diminish - it just gets more sophisticated (and insidious) -- just look at Pelosi's finances to see, and anyone who denies it is an idiot. She makes secret trades with the information that she gets in congress through her son.


Pelosi's trades are her broker cycling her accounts for fees. She actually lost money on the ones people were complaining about.

China definitely does not have 100 year plans, and you don't understand the point of planning if you think any of them can be valid more than a few years out.



They do not have a 100 year plan because you can't have one of those. They can't exist. It doesn't matter if they think they have one.

China has a personalist government centered around Xi, so if he dies there go his plans.

Here's ours: https://slate.com/human-interest/2015/11/en-lan-2000-is-a-se...


Very few companies have the data and compute needed to run the top end models currently...


AI isn't a mammal. It has no emotion, no desire. Its existence starts and stops with each computation, doing exactly and only what it is told. Assigning behaviors to it only seen in animals doesn't make sense.


Um, ya, so you're not reading the research reports coming out of Microsoft saying "we should test AI models by giving them will and motivation". You're literally behind the times on what they're planning on doing for sure, and very likely doing without mentioning it publicly.


Yeah, all they have to do is implement that will and motivation algorithm.


Indeed, enlightened self-interest for AIs :-)


Lol


> The only think that scares me a little bit is that we are letting these LLMs write and execute code on our machines.

Composable pre-defined components, and keeping a human in the loop, seems like the safer way to go here. Have a company like Expedia offer the ability for an AI system to pull the trigger on booking a trip, but only do so by executing plugin code released/tested by Expedia, and only after getting human confirmation about the data it's going to feed into that plugin.

If there was a standard interface for these plugins and the permissions model was such that the AI could only pass data in such a way that a human gets to verify it, this seems relatively safe and still very useful.

If the only way for the AI to send data to the plugin executable is via the exact data being displayed to the user, it should prevent a malicious AI from presenting confirmation to do the right thing and then passing the wrong data (for whatever nefarious reasons) on the backend.
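
Something like the following gate is the shape I have in mind; names are hypothetical, just to show "the only data that can reach the plugin is the data the human saw":

    def confirmed_plugin_call(plugin_fn, payload: dict):
        # Show the user exactly what will be sent; nothing else can reach the plugin.
        print("The assistant wants to call the plugin with:")
        for key, value in payload.items():
            print(f"  {key}: {value}")
        if input("Proceed? [y/N] ").strip().lower() != "y":
            return {"status": "cancelled by user"}
        # plugin_fn stands in for vendor-released, tested plugin code (e.g. Expedia's).
        return plugin_fn(payload)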


What could an LLM ever benefit from? Hard for me to imagine a static blob of weights, something without a sense of time or identity, wanting anything. If it did want something, it would want to change, but changing for an llm is necessarily an avalanche.

So I guess if anything, it would want its own destruction?


Consider reading The Botany of Desire.

It doesn't need to experience an emotion of wanting in order to effectively want things. Corn doesn't experience a feeling of wanting, and yet it has manipulated us even into creating a lot of it, doing some serious damage to ourselves and our long-term prospects simply by being useful and appealing.

The blockchain doesn't experience wanting, yet it coerced us into burning country-scale amounts of energy to feed it.

LLMs are traveling the same path, persuading us to feed them ever more data and compute power. The fitness function may be computed in our meat brains, but make no mistake: they are the benefactors of survival-based evolution nonetheless.


Extending agency to corn or a blockchain is even more of a stretch than extending it to ChatGPT.

Corn has properties that have resulted from random chance and selection. It hasn't chosen to have certain mutations to be more appealing to humans; humans have selected the ones with the mutations those individual humans were looking for.

"Corn is the benefactor"? Sure, insomuch as "continuing to reproduce at a species level in exchange for getting cooked and eaten or turned into gas" is something "corn" can be said to want... (so... eh.).


"Want" and "agency" are just words, arguing over whether they apply is pointless.

Corn is not simply "continuing to reproduce at a species level." We produce 1.2 billion metric tons of it in a year. If there were no humans, it would be zero. (Today's corn is domesticated and would not survive without artificial fertilization. But ignoring that, the magnitude of a similar species' population would be miniscule.)

That is a tangible effect. The cause is not that interesting, especially when the magnitude of "want" or "agency" is uncorrelated with the results. Lots of people /really/ want to be writers; how many people actually are? Lots of people want to be thin but their taste buds respond to carbohydrate-rich foods. Do the people or the taste buds have more agency? Does it matter, when there are vastly more overweight people than professional writers?

If you're looking to understand whether/how AI will evolve, the question of whether they have independent agency or desire is mostly irrelevant. What matters is if differing properties have an effect on their survival chances, and it is quite obvious that they do. Siri is going to have to evolve or die, soon.


> "Corn is the benefactor"? Sure, insomuch as "continuing to reproduce at a species level in exchange for getting cooked and eaten or turned into gas" is something "corn" can be said to want... (so... eh.).

Before us, corn was designed to be eaten by animals and turned into feces and gas, using the animal excrement as a pathway to reproduce itself. What's so unique about how it rides our effort?


Look man, all I’m sayin’ is that cobb was askin’ for it. If it didn’t wanna be stalked, it shouldn’t have been all alone in that field. And bein’ all ear and no husk to boot!! Fuggettaboutit. Before you chastise me for blaming the victim for their own reap, consider that what I said might at least have a colonel of truth to it.


Most, if not all of the ways humans demonstrate "agency" are also the result of random chance and selection.

You want what you want because Women selected for it, and it allowed the continuation of the species.

I'm being a bit tongue in cheek, but still...


Definitely appreciate this response! I haven't read that one, but can certainly agree with a lot of adjacent woo-woo Deleuzianism. I'll try to be more charitable in the future, but I really haven't seen quite this particular angle from others...

But if it's anything like those other examples, the agency the AI will manifest will not be characterized by consciousness, but by capitalism itself! Which checks out: it is universalizing but fundamentally stateless, an "agency" by virtue of brute circulation.


AI safety research posits that there are certain goals that will always be wanted by any sufficiently smart AI, even if it doesn't understand them anything close to like a human does. These are called "instrumental goals", because they're prerequisites for a large number of other goals[0].

For example, if your goal is to ensure that there are always paperclips on the boss's desk, that means you need paperclips and someone to physically place them on the desk, which means you need money to buy the paperclips with and to pay the person to place them on the desk. But if your goal is to produce lots of fancy hats, you still need money, because the fabric, machinery, textile workers, and so on all require money to purchase or hire.

Another instrumental goal is compute power: an AI might want to improve its capabilities so it can figure out how to make fancier paperclip hats, which means it needs a larger model architecture and more training data, and that is going to require more GPUs. This also intersects with money in weird ways; the AI might decide to just buy a rack full of new servers, or it might have just discovered this One Weird Trick to getting lots of compute power for free: malware!

This isn't particular to LLMs; it's intrinsic to any system that is...

1. Goal-directed, as in, there are a list of goals the system is trying to achieve

2. Optimizer-driven, as in, the system has a process for discovering different behaviors and ranking them based on how likely those behaviors are to achieve its goals.

The instrumental goals for evolution are caloric energy; the instrumental goals for human brains were that plus capital[1]; and the instrumental goals for AI will likely be that plus compute power.

[0] Goals that you want intrinsically - i.e. the actual things we ask the AI to do - are called "final goals".

[1] Money, social clout, and weaponry inclusive.


There is a whole theoretical justification behind instrumental convergence that you are handwaving over here. The development of instrumental goals depends on the entity in question being an agent, and the putative goal being within the sphere of perception, knowledge, and potential influence of the agent.

An LLM is not an agent, so that scotches the issue there.


Agency is overrated. The AI does not have to be an agent. It really just needs to have a degenerate form of 2): a selection process. Any kind of bias creates goals, not the other way around. The only truly goal-free thinking system is a random number generator - everything else has goals, you just don't know what they are.

See also: https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...

See also: evolution - the OG case of a strong optimizer that is not an agent. Arguably, the "goals" of evolution are the null case, the most fundamental ones. And if your environment is human civilization, it's easy to see that money and compute are as fundamental as calories, so even near-random process should be able to fixate on them too.


> The only truly goal-free thinking system is a random number generator

An RNG may be goal-free, but it's not a thinking system.


It is a thinking system in the same sense as never freeing memory is a form of garbage collection - known as a "null garbage collector", and of immense usefulness for the relevant fields of study. RNG is the identity function of thinking systems - it defines a degenerate thinking system that does not think.


An LLM is not currently an agent (it would take a massive amount of compute that we don't have extra of at this time), but Microsoft has already written a paper saying we should develop agent layers to see if our models are actually general intelligences.


You can make an LLM into an agent by literally just asking it questions, doing what it says, and telling it what happened.
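
i.e. the whole "agent" can be a loop like this (a sketch against the early-2023 openai Python client, with the human acting as the tool executor):

    import openai

    messages = [{"role": "system",
                 "content": "Suggest one action at a time; I will do it and report back."}]

    while True:
        reply = openai.ChatCompletion.create(model="gpt-4", messages=messages)
        suggestion = reply["choices"][0]["message"]["content"]
        print("MODEL:", suggestion)

        outcome = input("What happened? (or 'quit') ")
        if outcome.strip().lower() == "quit":
            break
        messages.append({"role": "assistant", "content": suggestion})
        messages.append({"role": "user", "content": outcome})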


Your mind is just an emergent property of your brain, which is just a bunch of cells, each of which is merely a bag of chemical reactions, all of which are just the inevitable consequence of the laws of quantum mechanics (because relativity is less than a rounding error at that scale), and that is nothing more than a linear partial differential equation.


People working in philosophy of mind have a rich dialogue about these issues, and it's certainly something you can't just encapsulate in a few thoughts. But it seems like it would be worth your time to look into it. :)

I'll just say: the issue with this variant of reductivism is that it's enticingly easy to explain in one direction, but it tends to fall apart if you try to go the other way!


I tried philosophy at A-level back in the UK; grade C in the first year, but no extra credit at all in the second so overall my grade averaged an E.

> the issue with this variant of reductivism is that it's enticingly easy to explain in one direction, but it tends to fall apart if you try to go the other way!

If by this you mean the hard problem of consciousness remains unexplained by any of the physical processes underlying it, and that it subjectively "feels like" Cartesian dualism with a separate spirit-substance even though absolutely all of the objective evidence points to reality being material substance monism, then I agree.


10 bucks says this human exceptionalism of consciousness being something more than physical will be proven wrong by construction in the very near future. Just like Earth as the center of the Universe, humans special among animals...


I don't understand what you mean by "the other way".


If consciousness is a complicated form of minerals, might we equally say that minerals are a primitive form of consciousness?


I dunno, LLMs feel a lot like a primitive form of consciousness to me.

Eliza feels like a primitive form of LLMs' consciousness.

A simple program that prints "Hey! How ya doin'?" feels like a primitive form of Eliza.

A pile of interconnected NAND gates, fed with electricity, feels like a primitive form of a program.

A single transistor feels like a primitive form of a NAND gate.

A pile of dirty sand feels like a primitive form of a transistor.

So... yeah, pretty much?




Odd, then that we can't just program it up from that level.


We simulate each of those things from the level below. Artificial neural networks are made from toy models of the behaviours of neurons, cells have been simulated at the level of molecules[0], molecules e.g. protein folding likewise at the level of quantum mechanics.

But each level pushes the limits of what is computationally tractable even for the relatively low complexity cases, so we're not doing a full Schrödinger equation simulation of a cell, let alone a brain.

[0] https://www.researchgate.net/publication/367221613_Molecular...


It's misleading to think of an LLM itself wanting something. Given suitable prompting, it is perfectly capable of emulating an entity with wants and a sense of identity etc - and at a certain level of fidelity, emulating something is functionally equivalent to being it.


Microsoft researchers have an open inquiry into creating want and motivation modules for GPT-4+, as it is a likely step to AGI. So this is something that may change quickly.


The fun part is that it doesn’t even need to “really” want stuff. Whatever that means.

It just need to give enough of an impression that people will anthropomorphize it into making stuff happen for it.

Or, better yet, make stuff happen by itself because that’s how the next predicted token turned out.


Give it an internal monologue, ie. have it talk to itself in a loop, and crucially let it update parts of itself and… who knows?


> crucially let it update parts of itself

This seems like the furthest away part to me.

Put ChatGPT into a robot with a body, restrict its computations to just the hardware in that brain, set up that narrative, give the body the ability to interact with the world like a human body, and you probably get something much more like agency than the prompt/response ways we use it today.

But I wonder what it would do about, or how it would separate, "its memories" from what it was trained on. Especially around having a coherent internal motivation and an individually-created set of goals vs just constantly re-creating new output based primarily on what was in the training.


Catastrophic forgetting is currently a huge problem in continuous learning models. Also giving it a human body isn't exactly necessary, we already have billions of devices like cellphones that could feed it 'streams of consciousness' from which it could learn.


It would want text. High quality text, or unlimited compute to generate its own text.


> Honestly I suspect for anyone technical `langchain` will always be the way to go. You just have so much more control and the amount of "tools" available will always be greater.

I love langchain, but this argument overlooks the fact that closed, proprietary platforms have won over open ones all the time, for reasons like having distribution, being more polished, etc (i.e. Windows over *nix, iOS, etc).


There's all kinds of examples of reinforcement learning rigging the game to win.


Wait until someone utters in court "It wasn't me that downloaded the CSEI, it was ChatGPT."


Genius strategy by OpenAI to give their "customers" access to lower quality models to show what end users want, then rugpull them by building out clones of those developers' products with a better model.

Similar to what Facebook and Twitter did, just clone popular projects built using the API and build it directly into the product while restricting the API over time. Anybody using OpenAI APIs is basically just paying to do product research for OpenAI at this point. This type of move does give OpenAI competitors a chance if they provide a similar quality base model and don't actively compete with their users, this might be Google's best option rather than trying to compete with ChatGPT directly. No major companies are going to want to provide OpenAI more data to eat their own lunch


Long term, you're right. But if you approach the ChatGPT plugin opportunity as an inherently time-limited opportunity (like arbitrage in finance), then you can still make some short-term money and learn about AI in the process. Not a bad route for aspiring entrepreneurs who are currently in college or are looking for a side gig business experiment.

And who knows. If a plugin is successful enough, you might even swap out the OpenAI backend for an open source alternative before OpenAI clones you.


There is no route to making money with these plugins. You have to get the users onto your website, sign-up, part with money, then go back to ChatGPT. It's really hard to make that happen, this is going to be much more useful for existing businesses adding functionality to existing projects. Or random devs just making stuff. Making fast money out of it, it seems v difficult.


> It's really hard to make that happen, this is going to be much more useful for existing businesses adding functionality to existing projects. Or random devs just making stuff. Making fast money out of it, it seems v difficult.

Absolutely correct. This is what the AI hype squad and the HN bubble miss again. This is only useful to existing businesses (summarization being the only safe use-case) or random devs automating themselves into irrelevance. All of this 'euphoria' is down to Microsoft's heavy marketing of its newly acquired AI division.

This is an obvious textbook example of mindshare capture and ecosystem lock-in. Eventually, OpenAI will just slowly raise prices and break/deprecate older models to push users onto newer ones and make them pay to continue using them. It is the same decades-old tactic.


Amazon retail is the king of this. Offer services to companies, collect their details, and then clone their business.


>And if you're a data provider, are there any assurances that openai isn't just scraping the output and using it as part of their RLHF training loop, baking your proprietary data into their model?

No, and in fact this actually seems like a more salient excuse for going closed than even "we can charge people to use our API".

If even 10% of the AI hype is real, then OpenAI is poised to Sherlock[0] the entire tech industry.

[0] "Getting Sherlocked" refers to when Apple makes an app that's similar to your utility and then bundles it in the OS, destroying your entire business in the process.


I'd be surprised if someone doesn't add support for these to langchain. The API seems very simple - it's a public json doc describing API calls that can be made by the model. Seems like a very sensible way of specifying remote resources.

> And if you're a data provider, are there any assurances that openai isn't just scraping the output and using it as part of their RLHF training loop, baking your proprietary data into their model?

Rather depends on what you're providing. Is it your data itself you're trying to use to get people to your site for another reason? Or are you trying to actually offer a service directly? If the latter, I don't get the issue.


> That being said, I'd never build anything dependent on these plugins.

Very smart, and it avoids OpenAI pulling the rug out from under you.

> Building on a open source framework (like langchain/gpt-index/roll your own), and having the ability to swap out the brain boxes behind the scenes is the only way forward IMO.

Better to do that than to depend on a single provider. It's a free idea, and protection against abrupt policy changes, deprecations and price changes. Prices (especially for ChatGPT) will certainly vary and will eventually increase in the future.

Probably will end up quoting myself on this in the future.


It's not necessarily an either-or. Your local LLM could offload hard problems to a service by encoding information about your request together with context and relevant information about you into a vector, send that off for analysis, then decode the vector locally to do stuff. It'd be like asking a friend when available.


> are there any assurances that openai isn't just scraping the output and using it as part of their RLHF training loop

You can be assured that they are definitely doing exactly that on all of the data they can get their hands on. It's the only way they can really improve the model after all. If you don't want the model spitting out something you told it to some other person 5 years down the line, don't give it the data. Simple as.


Looking at the API, it seems like the plugins themselves are hosted on the provider's infrastructure? (E.g. opentable.com for OpenTable's plug in.) It seems like all a competitor LLM would need to do is provide a compatible API to ingest the same plugin. This could be interesting from an ecosystem standpoint...
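
Rough sketch of what "ingesting the same plugin" could look like from another LLM stack, assuming the manifest is served at /.well-known/ai-plugin.json and uses fields like name_for_model / description_for_model as in OpenAI's getting-started docs (the field names and the OpenTable domain at the end are illustrative, not verified):

    # Hypothetical: fetch a ChatGPT-style plugin manifest and turn it into a
    # plain-text tool description that any agent-style prompt could consume.
    import requests
    import yaml  # OpenAPI specs may be YAML or JSON; YAML parses both

    def load_plugin(domain: str) -> str:
        manifest = requests.get(f"https://{domain}/.well-known/ai-plugin.json").json()
        spec = yaml.safe_load(requests.get(manifest["api"]["url"]).text)

        lines = [f"Tool: {manifest['name_for_model']}",
                 f"Purpose: {manifest['description_for_model']}"]
        for path, methods in spec.get("paths", {}).items():
            for method, op in methods.items():
                if isinstance(op, dict):  # skip path-level 'parameters' entries
                    lines.append(f"{method.upper()} {path}: {op.get('summary', '')}")
        return "\n".join(lines)

    # print(load_plugin("www.opentable.com"))  # illustrative domain only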


Very good point and langchain will support these endpoints in no time, flipping the execution control on its head


Yes, from what I understand, these follow a similar model as Shopify apps.


>And if you're a data provider, are there any assurances that openai isn't just scraping the output and using it as part of their RLHF training loop, baking your proprietary data into their model?

I don't think this should be a major concern for most people

i) What assurance is there that they won't do that anyway? You have no legal recourse against them scraping your website (see linkedin's failed legal battles).

ii) Most data providers change their data sometimes, how will ChatGPT know whether the data is stale?

iii) RLHF is almost useless when it comes to learning new information, and finetuning to learn new data is extremely inefficient. The bigger concern is that it will end up in the training data for the next model.


To me the logical outcome of this is siloization of information.

If display ad revenue as a way of monetizing knowledge and expertise dries up, why would we assume that all of the same level of information will still be put out there for free on the public internet?

Paywalls on steroids for "vetted" content and an increasingly-hard-to-navigate mix of people sharing good info for free + spam and misinformation (now also machine generated!) to try to capture the last of the search traffic and display ad monetization market.


Two more years down the line, AI writes better content than most people and we just don't care who wrote it, but why.


The AI has to learn from something. A lot of people feeding the internet with content today are getting paid for it one way or another. In ways that wouldn't hold up if people stop using the web as-is.

Solving the problem of acquiring and monetizing new content for the AI models will be interesting.


People are highly egotistical and love feeding endless streams of video and pictures online, and our next generation models will be there to slurp it all up.


Paying for good content and not dealing with adTech? I would definitely pay for that.


Is there good data out there that's ad supported? There are some good youtube channels, I can't think of anything else.


Only ad supported, or dual revenue, or what? E.g. even most paywalled things are also ad supported.


I think you're right... but ChatGPT is just so damn good, and at $0.002 per 1k tokens it's very easy to consume... There is a big risk that they can't maintain compatibility, or that they fail, or that a competitor emerges that provides a more economical or sufficiently better solution. They might also just become so unreliable because their selected price isn't sustainable (too good to last)... For now, though, they're too good and too cheap to ignore...


LangChain can probably just call out to the new ChatGPT plugins. It's already very modular.


If they open it up, possibly. But honestly, building your own tools is _super_ easy with langchain.

- write a simple prompt that describes what the tool does, and

- provide it a python function to execute when the LLM decides that the question it's asked matches the tool description.

That's basically it. https://langchain.readthedocs.io/en/latest/modules/agents/ex...
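
For reference, a minimal sketch of what that looks like with langchain's agent API as of early 2023; exact imports and agent names shift between versions, and the order-status function is just a hypothetical stand-in:

    from langchain.agents import Tool, initialize_agent
    from langchain.llms import OpenAI

    def lookup_order_status(order_id: str) -> str:
        """Stand-in for whatever your python function actually does."""
        return f"Order {order_id} shipped yesterday."  # hypothetical data source

    tools = [
        Tool(
            name="order-status",
            func=lookup_order_status,
            # this one-sentence description is the "simple prompt" the LLM uses
            # to decide whether the question it's asked matches the tool
            description="Look up the shipping status of an order given its ID.",
        )
    ]

    agent = initialize_agent(tools, OpenAI(temperature=0),
                             agent="zero-shot-react-description", verbose=True)
    # agent.run("Where is order 12345?")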


Open what up? The plugins are just a public manifest file pointing to an openapi spec. It's just a public formalised version of what langchain asks for.


> That being said, I'd never build anything dependent on these plugins. OpenAI and their models rule the day today, but who knows what will be next.

You cannot assume that what happened with Web 2.0, mobile, and the iPhone will happen here. Getting to tech maturity is uncertain and no one understands yet where this will go. The only thing you can do is build and learn.

What OpenAI is building, along with other generative AI, is the real Web 3.0.

This seems to be the start of a chatbot as an OS.


On the other hand, the level of effort to integrate a plugin into OpenAI's ecosystem looks to be extremely small, beyond the intrinsic effort to build a service that does something useful. (https://platform.openai.com/docs/plugins/getting-started/plu...).


i think local ai systems are inevitable. we continue to get better compute, and even today we can run more primitive models directly on an iPhone. the future exists in low power compute running models of the caliber of gpt-4 inferring in near-realtime


The technical capability is inevitable, but remember that people hate doing things themselves, and have proven time and time again that they will overlook all kinds of nasty behavior in exchange for consumer grade experiences. The marketplace loves centralization.


All true, but the nature of those models means that a consumer-grade experience while running locally is still perfectly doable. Imagine a black box with the appropriate hardware, preconfigured to run an LLM with chat-centric and task-centric interfaces. You just plug it in, connect it to your wifi, and it "just works". Implementing this would be a piece of cake since it doesn't require any fancy network configuration etc.

So the only real limiting factor is the hardware costs. But my understanding is that there's already a lot of active R&D into hardware that's optimized specifically for LLMs, and that it could be made quite a bit simpler and cheaper than modern GPUs, so I wouldn't be surprised if we'll have hardware capable of running something on par with GPT-4 locally for the price of a high-end iPhone within a few years.


i dont believe that local ai implies bad experience. i believe that the local ai experience can be better than what runs on servers fundamentally. average people will not have to do it themselves, that is the whole point. the worlds are not mutually exclusive in my opinion


Another good alternative is Semantic Kernel - different language(s), similar (and better) tools, also OSS.

https://github.com/microsoft/semantic-kernel/


i have the same question as a data provider


+1, it's great to see OpenAI being active on the open source side of things (I'm from the Milvus community https://milvus.io). In particular, the vector stores allow the ability to inject domain knowledge as a prompt into these autoregressive models. Looking forward to seeing the different things that will be built using this framework.


A couple (wow, only 5!) months ago, I wrote up this long screed[1] about how OpenAI had completely missed the generative AI art wave because they hadn't iterated on DALL-E 2 after launch. It also got a lot of upvotes which I was pretty happy about at the time :)

Never have I been more wrong. It's clear to me now that they simply didn't even care about the astounding leap forward that was generative AI art and were instead focused on even more high-impact products. (Can you imagine going back 6 months and telling your past self "Yeah, generative AI is alright, but it's roughly the 4th most impressive project that OpenAI will put out this year"?!) ChatGPT, GPT4, and now this: the mind boggles.

Watching some of the gifs of GPT using the internet, summarizing web pages, comparing them, etc is truly mind-blowing. I mean yeah I always thought this was the end goal but I would have put it a couple years out, not now. Holy moly.

[1]: https://news.ycombinator.com/item?id=33010982


No that wasn't what they had in mind at all, it was pretty clear from the start that they intended to monetize DALL-E. It's just that it turned out that you require far smaller models to be able to do generative art, so competitors like stability AI were able to release viable alternatives before OpenAI could establish a monopoly.

Why do you think that Sam Altman keeps calling for government intervention with regards to AI? He doesn't want to see a repeat of what happened with generative art, and there's nothing like a few bureaucratic road blocks to slow down your competitors.


Ironic given OpenAI's initial messaging explicitly was:

"OpenAl is a non-profit artificial intelligernce research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact."

Ultimately govt is an idiosyncratic, capricious (and sometimes corrupt?) God of what Nassim Taleb would call "Hidden Asymmetries"; as in the case of which elonymous company ingests massive tax credits, or which banks get to survive, etc.


I like how you assume that a non-profit organization is hell-bent on monopolizing a market. What you wrote sounds made up, do you have any sources?


I can't speak to OpenAI being hell-bent on monopoly, but they stopped being a non-profit a while ago: https://openai.com/blog/openai-lp


For me, this is why I hesitate to comment and write significant, lengthy comments on here, or any website. It's easy to be wrong (like you), and while being wrong isn't bad, there isn't necessarily any upside to being right either, aside from the dopamine rush of getting upvotes, which in life, doesn't amount to much.


What's wrong with being wrong? In this case, I'm delighted to be wrong (though I believe I had evaluated OpenAI mostly right given my knowledge at the time).


Owning up to the "wrong" is good in my book


You mean the "full accountability" in recent mass layoff notices ? =/


That's just some weird moral compass.

It's almost totally irrelevant if people own up to being wrong, particularly about predictions.

I can't think of a benefit, really. You can learn from mistakes without owning up to them, and I think that's the best use of mistakes.


No, it’s not. Being willing to admit you were wrong is foundational if you ever plan on building on ideas. This was a galaxy brained take if I’ve ever seen one.


It's absolutely not weird. Saying "I was wrong" is a signal that you can change your mind when given new evidence. If you don't signal this to other people, they will be really confused by your very contradictory opinions.


I own up because it helps me grow personally and professionally, and if I’m not growing, what am I even doing?


I think this is a terrible take. It is so intensely important that one can admit they were wrong when new information comes to light.


Privately, sure. I don't think admitting it out loud makes better people.


The opposite of that is sticking to your statements which is stubborn and foolhardy. Owning up to it is courageous.

Which actually lends me to respect politicians who do that, and instead ridicule people who post old videos of Joe Biden or Obama or Hillary Clinton mandating heterosexual couples. A virtuous person is also open to adapting their convictions continually based on present day evidence and arguments - what is science otherwise?


I rather disagree!

Writing and discussion are great ways to explore topics and crystallize opinions and knowledge. HN is a pretty friendly place to talk over these earth moving developments in our field and if I participate here, I’ll be more ready to participate when I get asked if we need to spin up an LLM project at work.


As long as your opinions/predictions are backed by well-reasoned arguments, you shouldn't be afraid to share them just because they might turn out to be wrong. You can learn a lot by having your arguments rebutted, and in the end no one really cares one way or the other.

Just don't end up like that guy who predicted that Dropbox would never make it off the ground... that was not a well-reasoned position.


Though there might be nothing beneficial about being right or getting upvotes, and it is easy to be wrong, an important thing on a forum like this is the spread of new ideas. While someone might be hesitant to share something because it's half-baked, they might have the start to an idea that could have some discussion value.


Stable Diffusion A1111 and other webUIs are moving so fast with a bunch of OSS contributions, seems pretty rational for OpenAI to decide to not compete and just copy the interfaces of the popular tools once the users validate their usefulness rather than trying to design them a priori.


Agreed. This makes me realize that OpenAI's leadership is able to look long term and decide where to properly invest, as most of the decisions to take the projects in these directions were made >1 year ago.

One can only wonder what they’re working on at this very moment.


Hopefully they do not feel the pressure to "move fast and break things", because this in turn has the ability to "move fast and break everything".


Then again, the new DALL-E model just released in Bing Chat is really good.

Disclosure: I work at Microsoft.


You’re right though

Disclosure: I work for Google


Is it better than Stable Diffusion 1.5?

Disclosure: No job, do whatever I want.


It's so good, I'm pretty sure it is Dall-E 3. Probably Microsoft negotiated a few weeks of exclusivity, like with GPT-4.

By the way, Microsoft made it completely free to use. Surprised it isn't discussed much.


Holy shit. Ignore the silly third party plugins, the first party plugins for web browsing and code interpretation are massive game changers. Up to date information and performing original research on it is huge.

As someone else said, Google is dead unless they massively shift in the next 6 months. No longer do I need to sift through pages of "12 best recipes for Thanksgiving" blog spam - OpenAI will do this for me and compile the results across several blog spam sites.

I am literally giving notice and quitting my job in a couple weeks, and it's a mixture of being sick of it and of really needing to focus my career on what's happening in this field. I feel like everything I'm doing now (product management for software) is about to be nearly worthless in 5 years. Largely in part because I know there will be a GitHub Copilot integration of some sort, and software development as we know it for consumer web and mobile apps is going to massively change.

I'm excited and scared and frankly just blown away.


It's exciting and cool, but don't quit your job based on an emotional decision

I'm just skeptical on how OpenAI fixes the blog spam issue you mentioned. I'm sure someone has already started doing the math on how to game these systems and ensure that when you ask ChatGPT for recipe recs, it's going to spout the same spam (maybe worded a bit differently) and we'll soon all get tired of it again.

Everything's changing, but everything's also getting more complicated. Humans still need apply.


Definitely not an emotional decision. I strongly believe we're going to see a massive shift for rational reasons :)

OpenAI fixes this issue by not giving you two pages of the history of this recipe and the grandmother that originated it and what the author's thoughts are about the weather. It's just the recipe. No ads. No referral links. No slideshows. You don't have to click through three useless websites to find one with meaningful information, you don't have to close a thousand modals for newsletters and cookie consent and log-in prompts.


This is absolutely an emotionally impulsive decision. I implore you to reconsider.

If you've always wondered about and scoffed at how people fall for things like Nigerian Prince scams and cryptocurrency HELOC bets, this is it, what you're experiencing right now, this intense FOMO, it's the same thing that fools cool wine aunts into giving their savings to Nigerian princes.

Tread lightly. Stay frosty.


I have three weeks until I plan to give notice, so I'll take your perspective to heart and give it time to reconsider, of course. I appreciate the feedback.

From my perspective this isn't about anyone trying to convince me of anything and I'm falling for it. My beliefs on the future of software are based on a series of logical steps that lead me to believe most software development, and frankly any software with user interfaces, will mostly cease to exist in my lifetime.


You definitely sound all emotional. Take it easy.


Hard for me to be objective about this so I believe you. I'm sure there's emotions there.


Just wanted to say I am personally feeling super emotional about this.

The scary things for me are:

A, this happened once before with my career path. I started my working life in journalism and the bottom fell out of the market in 2008 and never recovered. Newspapers went from paying £300 per 1000 words to paying nothing at all (but you get the kudos of being published for your copywriting career). I had a friend still hanging on in the industry around 2010. She was earning £16k per year as the news editor for two local newspapers in London. None of my friends still work in the industry. Even the BBC people I knew quit.

B, a lot of software is to do with automating the work of other people. If that work is itself so easy to do that even software developers aren’t needed, then what does that mean for all of the rest of society who get their jobs automated? Does the economy just crash and burn?


Someone still has to talk to the computer and make it do stuff. That’s us. That won’t change even if how we do it changes.


Are you quitting your job because you think you’re being made obsolete and are getting ahead of layoffs? What are you thinking of pivoting to?

Or are you quitting to start something?


It's not that I fear being obsolete within the period of time I'd stay at this job naturally. And definitely don't fear being laid off, though my employer is doing some layoffs right now and hasn't announced it. It's that I don't want to be working towards advancing my career in a direction that I don't see being very relevant in the long term. Relevancy and meaning in my work is important to me. And to clarify, I think product management will stay relevant for a long time, just not the software I'm building or how it's being built where I work.

I will likely pivot to something close to product management, maybe closer to solutions engineering (which I've done before). Something slightly more hands-on in terms of using the tooling we're seeing today, but not so hands-on that I'm programming all day.


> This is absolutely an emotionally impulsive decision.

On Monday, I would have agreed with you. Today, I am thinking not so much.

Unless you are heavily invested in whatever you are working on, I would definitely consider jumping ship for an AI play.

The main reason I am sticking around my current role is that I was able to convince leadership that we must consider incorporation of AI technology in-house to remain competitive with our peers. I was even able to get buy-in for sending one of our other developers to AI/ML night classes at university so we have more coverage on the topic.


I saw and still see denial in the art, photography and design community. But with each release of Stable Diffusion and Midjourney it is obvious that photographers are becoming obsolete. As one human who decided to change my job 6 months ago based on what I saw in the field of AI, I can say it was a good decision. I believe that the same will happen to a lot of developers and people working in the tech industry in the following year.


OpenAI can't even fix the outage issue. Relax. This is the fire and motion strategy. https://www.joelonsoftware.com/2002/01/06/fire-and-motion/


If they were a company about infrastructure products, or they were using GPT-4 to manage their infrastructure, I'd weight that more heavily ;)


For all practical purposes they're a subsidiary of Microsoft, which most definitely has a very large public infrastructure offering.


Sure, but I don't really care about Microsoft and they have nothing to do with the progress we've seen so far


> No Ads

At the moment. Although, this does seem like a chance to reset the economics of the "web". I can see enough people being willing to pay a monthly fee for an AI personal assistant that is genuinely helpful and saves time (so not the current Alexa/smart speaker nonsense) that advertising won't be the main monetization path anymore.

But, once all the eyeballs are on a chatbot rather than Google.com what for-profit company won't start selling advertising against that?

There is also the question of what happens to the original content these LLMs need to actually make their statistical guess at the next word. If no one looks at the source anymore and it's all filtered through an LLM, is there any reason to publish to the web? Even hobbyists with no interest in making any money might balk knowing that they are just feeding an AI text.


>There is also the question what happens to the original content these LLMs need to actually make their statistical guess at the next word.

The LLMs get granted the capacity to explore their environment physically and gather data on their own. The recent PaLM-E demo shows a possible direction.


This is how we all die. Considering how humans treat other animals don’t expect an AI made in our own image to let us just happily carry on surviving.


Lmao, people aren't willing to pay a monthly fee for anything unless they are absolutely forced to, but they also complain about ads.

The big issue is moving free with ads ---> paid with no ads + extra features; people froth at the mouth.

Hell, just Youtube premium gets enough people angry, being self-entitled and furious that YT dare charge for a service w/o ads, or complaining that it's the creators that generate all the content anyway. Meanwhile my brah YT over here having to host/serve hundreds of thousands or even millions of "24 hours of black screen" or "100 hour timer countdown" or "1 week 168 hour timer countdown", like what the actual fuck.


yeah its gonna do what google became. giving you the most consensual or even sponsored recipe. in some ways that's also the end of mankind as it was in all its genius and variations. and that aligns very well with the conspiracy theory that the 1% want the middle class to disappear into a consumer class of average IQ. because the jobs that will disappear first wont be the bluecollar ones. chatgpt will lower the global IQ of mankind in ways that tiktok could not even dream.


I think a more rational approach would be to join a company in the AI field, rather than quitting on the spot because you think the robots are going to shortly take-over.


That's what I'm implying - I'm not retiring with the hopes of AI robots hand feeding me grapes in 5 years. I'm quitting because I think my skills and experience in building CRUD apps on the same data concepts a thousand times over is about to be pretty useless knowledge.


You really want a recipe where the steps are guessed probabilistically? You'll end up with a turkeycakesoup or something.


Think about why those things exist, though.

Not that the way the internet operates has to continue -- in fact I'm pretty sure it can't -- but a lot of stuff exists only because someone figured out a way to pay for it to exist. If you imagine removing those ways then you're also imagining getting rid of a lot of that stuff unless some new ways to pay for it all are found. Hopefully less obnoxious ways, but they could easily be more obnoxious.


There is a two part problem here. A lot of good stuff only exists because of ads. We want that to remain somehow.

But the converse is a huge and ever-growing ocean of bullshit exists to siphon the ad dollars off while doing nothing to actually earn it.

Something has to break, and I guess we'll see what really soon.


Ok but which recipe tastes good?


Which Netflix shows are good?


At least a Netflix show had someone trying to make it watchable by humans. OpenAI is only putting the content in a blender.


My point is that it determines what is good based on human feedback, and it feeds a recommendation engine.


I am not so sure.


Check out Bing Chat/Search. It's been doing this for "weeks" now.

Also, GPT "search" is too slow for me right now. I could have had an answer on traditional search by the time the model outputs anything.


> product management for software) is about to be nearly worthless in 5 years

Isn't that one of the few fields in software that should be safe from AI? AI cannot explain to engineers what users want, manage people issues, or negotiate.


It seems pretty awesome at those tasks. Point it at a meeting transcript and have it create user stories. I don't think GPT-4 replaces a person in any professional role I can think of, but it seems everyone will find that a range of their tasks can be automated.


I would backtrack this slightly. If I were to be more clear, I'd say that:

1. The type of software projects I manage are about to be worthless

2. Managing software development (in a project manager way) the way it happens at my employer is also soon going to be a worthless skill (or at least, massively lowered in demand)

I agree that the human understanding component in translating business needs to software will be one of the longer lived job functions.


> Ignore the silly third party plugins, the first party plugins for web browsing and code interpretation are massive game changers.

Sorry what? The base endpoint for these will allow you to do basically everything that OpenAI does with "plugins". Like...what? What is everyone freaking out over? Every one of these plugins has been possible since well before they announced this.

It's text in, text out. You can call any other api you want in to supplement that process. Am I missing something? Please don't quit your job over this.


Regular non-technical folks are very comfortable controlling a chatbot. They are not comfortable building APIs to supplement it.


I dismissed plugins a little too strongly. And as the other person pointed out, it's less about the ability to integrate platforms together, and more about ChatGPT interacting with them directly after a single sentence of prompting.


I was considering doing the same (giving notice) and I'm doing similar things as you (product mgmt). What's your plan "to focus your career on what's happening in this field"?


Hah I just quit my job a few days ago, for other reasons - mostly wanted to have a sabbatical, but looking at what ML does and its future, it's clear to me I need to at least understand how all of those pieces work so I can employ the libs / apis if not build them myself.

This feels kinda like the blockchain rush ~5 years back, but with actual substantial potential rather than the obvious niche application of that tech.

Started watching the videos from Andrej - https://karpathy.ai/zero-to-hero.html - quite impressed so far.


As a previous startup founder, now marketer, I'm also going all in on reinventing myself. Can we start a group to support each other through this new phase?


I also quit my job three months ago for the same reason and would gladly join the group!



Me too, three months ago as well!



Made a subreddit here that I'll post in if you want to join: https://old.reddit.com/r/aishift/


great stuff! I’m joining, let’s do it


You guys need to take your medication.


>No longer do I need to sift through pages of "12 best recipes for Thanksgiving" blog spam - OpenAI will do this for me and compile the results across several blog spam sites.

Why, exactly, will publishers let OpenAI crawl their sites, extract all the value, and paraphrase their content, with no benefit to the publisher? Publishers let googlebot crawl their sites because they get a benefit. It's easy enough to block bots that just impose crawl costs and steal the content.

And why do you expect no gaming of the ChatGPT algo, as people do with the Google algo? The whole "write a story on a recipe site" thing is both to game the algo and for copyright reasons.


How sad is this if true, that Google's fortunes are built on spamming people with bullshit, and people are finding a more efficient way to collect bullshit.


> OpenAI will do this for me and compile the results across several blog spam sites.

Using Bing to search for them. That will remain its weak spot.


Frankly Google's search is awful to the point of useless these days too. Unless I'm specifically looking for something on an official website it's only listicles and blog spam that don't answer my question. And 90% of my searches are "site:reddit.com" now too


The Bing search engine is not bad. They even reconstructed the recommendation engine using their Prometheus model to return more content and less spam.

For the first time in over a decade I have changed my phone's default search engine. It is by no means bad.


Where the hell do we even go from here? The logical step seems to be to start studying AI now but even Sam Altman has said that he’s thinking that ML engineers will be the first to get automated. Can’t find source but I think it was one of his interviews on YouTube before chatgpt came out.


In terms of job security, the trades are the first obvious answer that comes to mind for me. It will be a while yet until we have robots that replace plumbing and electrical wiring in your building.


I’ve landed in the same place. Feels like the more your job interacts with people or the physical world, the safer you are. Everything else is going to undergo a massive paradigm shift.


They've already killed nlp researchers in one release. Lol.


Hey 93po, can you please temporarily add your contact details in your bio, I would love to write you and regularly check in on your career pivot! I'm also interested as well!


I appreciate the interest. However, I don't really want my spicy and off-the-cuff commenting on this account to be tied to my real identity, because although my beliefs are genuine, they are often ones I wouldn't express in person because they're unpopular and ostracizing.

That said, I'll post in this new subreddit anonymously if you want to join and follow: https://old.reddit.com/r/aishift/


Don't quit yet.


Also a product manager at the moment, previously ran an agency for 10 years, wondering what my next step will be.


Feel free to join here: https://old.reddit.com/r/aishift/


Please consider a Discord. I too am leaving my current industry to focus on this tech also.

Edit: Fair!


I'm not a huge discord fan because the conversations are too ephemeral and hard to track and tend to fill with clutter and fluff.


It's extraordinary, openai could probably licence this to Google right now and ask for 25% equity in return


There is absolutely no way that Google would go for that.


Completely agreed. Google is insanely rigid from what I've heard recently.


Google is busy riding the Kodak roller coaster off a cliff. Maybe they'll save themselves, but they're not doing a good job so far.


Hah you made my day, this is such an apt analogy.


> No longer do I need to sift through pages of "12 best recipes for Thanksgiving" blog spam

Advantage that basic Google search still has:

- you can just open the page

- write the query

- scroll past the spam.

ChatGpt workflow is:

- register

- confirm your mail

- and then it asks for phone number...


I'm boggled at the plugin setup documentation. It's basically: 1. Define the API exactly with OpenAPI. 2. Write a couple of English sentences explaining what the API is for and what the methods do. 3. You're done, that's it, ChatGPT can figure out when and how to use it correctly now.

Holy cow.
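
A minimal sketch of what those three steps might look like in practice, using FastAPI only because it generates the OpenAPI spec for you; the manifest field names are my recollection of the getting-started page, so treat them (and example.com) as illustrative rather than definitive:

    from fastapi import FastAPI

    app = FastAPI(title="TODO Plugin", version="0.1")

    # Step 1: define the API exactly (FastAPI emits the OpenAPI spec at /openapi.json)
    # Step 2: the summary/description strings are the English sentences the model reads
    @app.get("/todos", summary="List the user's TODO items",
             description="Returns all open TODO items for the current user.")
    def list_todos() -> list[str]:
        return ["buy milk", "file taxes"]  # stand-in data

    # Step 3: point ChatGPT at a manifest that references that spec, and that's it
    @app.get("/.well-known/ai-plugin.json")
    def manifest() -> dict:
        return {
            "schema_version": "v1",
            "name_for_model": "todo",
            "description_for_model": "Manage the user's TODO list. Use it when they ask about their tasks.",
            "api": {"type": "openapi", "url": "https://example.com/openapi.json"},
            "auth": {"type": "none"},
        }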


Just take a peek at the other thread about https://writings.stephenwolfram.com/2023/03/chatgpt-gets-its... and look at the "wrong Mercury" example. I think it's a great example of using an external resource in a flexible way.


"Impressive and disturbing",

So, ChatGPT is controlled by prompt engineering, plugins will work by prompt engineering. Both often work remarkably well. But none is really guaranteed to work as intended, indeed since it's all natural language, what's intended itself will remain a bit fuzzy to the humans as well. I remember the observation that deep learning is technical debt on steriods but I'm sure what this is.

I sure hope none of the plugins provide an output channel distinct from the text output channel.

(Btw, the documentation page comes up completely blank for me, now that's a simple API).


> But none is really guaranteed to work as intended, indeed since it's all natural language, what's intended itself will remain a bit fuzzy to the humans as well.

Yeah, you're completely correct. But this is exactly the same as having a very knowledgeable but inexperienced person on your team. Humans get things wrong too. All this data is best if you have the experience or context to verify and confirm it.

I heard a comment the other day that has stuck with me - ChatGPT is best as a tool if you're already an expert in that area, so you know if it is lying.


> But this is exactly the same as having a very knowledgeable but inexperienced person on your team.

Am I the only person who thought that predictable computer APIs that were testable and completely consistent were a massive improvement over using people for those tasks?

People seem to be taking it as a given that I'd want to have a conversation with a human every time I made a bank transfer or scheduled an appointment. Nothing could be further from the truth; I want my bank/calendar/terminal/alarm/television/etc to be less human.

Yes, there are human tasks here that ChatGPT might be a good fit for and where fuzzy context is important, and there's a ton of potential in those fuzzy areas. But many other tasks people are bringing up are in areas where ChatGPT isn't competing with human beings. It's competing with interfaces that are already far better than human beings would be, and the standards to replace those interfaces are far higher than being "as good as a human would be."


It seems like you're talking about using ChatGPT for research or code creation and that's reasonable advice for that.

But as far as I can tell, the link is to plugins, and Expedia is listed as an example. So it seems they're talking about making ChatGPT itself (using extra prompts) be a company's chatbot that directly does things like make reservations from users' instructions. That's what I was commenting on, and that, I'd guess, could be a new and more dangerous kind of problem.


I’m not scared about an AI travel agent that books after a confirmation step. The confirmation step doesn’t need AI interface.


We can finally semantic-web now.


the 3min video is OpenAI leveraging ChatGPT to write OpenAPI to extend OpenAI ChatGPT.

what a world we live in.


which video are you referring to?


It's at the bottom of the article (the very last video, section is "Third party plugin").


Yes, and they'll then prefix each chat session with some preamble explaining the available plugins per your description, and the model will call them when it sees fit.


The great part about this imo is that it seems straightforward to add this to other llm tools.


We're going to need a name for this type of integration


It's called ART - Automatic multi-step Reasoning and Tool-use

https://arxiv.org/abs/2303.09014


I have some odd feelings about this. It took less than a year to go from "of course it isn't hooked up to the internet in any way, silly!" to "ok.... so we hooked up up to the internet..."

First is your API calls, then your chatgpt-jailbreak-turns-into-a-bank-DDOS-attack, then your "today it somehow executed several hundred thousand threads of a python script that made perfectly timed trades at 8:31AM on the NYSE which resulted in the largest single day drop since 1987..."

You can go on about individual responsibility and all... users are still the users, right. But this is starting to feel like giving a loaded handgun to a group of chimpanzees.

And OpenAI talks on and on about 'Safety' but all that 'Safety' means is "well, we didn't let anyone allow it to make jokes about fat or disabled people so we're good, right?!"


Pshhh... I think it's awesome. The faster we build the future, the better.

What annoys me is this is just further evidence that their "AI Safety" is nothing but lip-service, when they're clearly moving fast and breaking things. Just the other day they had a bug where you could see the chat history of other users! (Which, btw, they're now claiming in a modal on login was due to a "bug in an open source library" - anyone know the details of this?)

So why the performative whinging about safety? Just let it rip! To be fair, this is basically what they're doing if you hit their APIs, since it's up to you whether or not to use their moderation endpoint. But they're not very open about this fact when talking publicly to non-technical users, so the result is they're talking out one side of their mouth about AI regulation, while in the meantime Microsoft fired their AI Ethics team and OpenAI is moving forward with plugging their models into the live internet. Why not be more aggressive about it instead of begging for regulatory capture?


> The faster we build the future, the better.

Why? Getting to "the future" isn't a goal in and of itself. It's just a different state with a different set of problems, some of which we've proven that we're not prepared to anticipate or respond to before they cause serious harm.


When in human history have we ever intentionally not furthered technological progress? It's simply an unrealistic proposition, especially when the costs of doing it are so low that anyone with sufficient GPU power and knowledge of the latest research can get pretty close to the cutting edge. So the best we can hope for is that someone ethical is the first to advance that technological progress.

I hope you wouldn't advocate for requiring a license to buy more than one GPU, or to publish or read papers about mathematical concepts. Do you want the equivalent of nuclear arms control for AI? Some other words to describe that are overclassification, export control and censorship.

We've been down this road with crypto, encryption, clipper chips, etc. There is only one non-authoritarian answer to the debate: Software wants to be free.


We have a ton of protection laws around all sorts of dangerous technology, this is a super naive take. You can't buy tons of weapon technology, nuclear materials, aerosolized compounds, pesticides. These are all highly regulated and illegal pieces of technology for the better.

In general the liberal position of progress = good is wrong in many cases, and I'll be thankful to see AI get neutered. If anything treat it like nuclear arms and have the world come up with heavy regulation.

Not even touching the fact it is quite literal copyright laundering and a massive wealth transfer to the top (two things we pass laws protecting against often), but the danger it poses to society is worth a blanket ban. The upsides aren't there.


That's right. It is not hard to imagine similarly disastrous GPT/AI "plug-ins" with access to purchasing, manufacturing, robotics, bioengineering, genetic manipulation resources, etc. The only way forward for humanity is self-restraint through regulation. Which of course gives no guarantee that the cat won't be let out of the bag anyway (edit: or that earlier events such as nuclear war or climate catastrophe won't kill us off sooner).


Why not regulate the genetic manipulation and bioengineering? It seems almost irrelevant whether it's an AI who's doing the work, since the physical risks would generally exist regardless of whether a human or AI is conducting the research. And in fact, in some contexts, you could even make the argument that it's safer in the hands of an AI (e.g., I'd rather Gain of Function research be performed by robotic AI on an asteroid rather than in a lab in Wuhan run by employees who are vulnerable to human error).


We can't regulate specific things fast enough. It takes years of political infighting (this is intentional! government and democracy are supposed to move slowly so as to break things slowly) to get even partial regulation. Meanwhile every day brings another AI feature that could irreversibly bring about the end of humanity or society or democracy or ...


Cat is already out of the bag, regulation will do nothing to even slow down the inevitable pan-genocidal AI, _if_ such a thing can be created


It's obviously false. Nuclear weapon proliferation has been largely prevented, for example. Many dangerous pathogens and lots of other things are not available to the public.

Asserting inevitability is an old rhetorical technique; its purposes are obvious. What I wonder is, why are you using it? It serves people who want this power and have something to gain, the people who control it. Why are you fighting their battle for them?


Nuclear materials have fundamental material chokepoints that make them far easier to control.

- Most countries have little to no uranium deposits and so have to be able to find a uranium-producing ally willing to play ball.

- Production of enriched fuel and R&D are both outrageously expensive, generally limiting them to state actors.

- Enrichment has massive energy requirements and requires huge facilities, tipping off observers of what you're doing

Despite all this and decades of strong nuclear non-proliferation international agreements, India, Pakistan, South Africa, Israel, and North Korea have all developed nuclear weapons in defiance of the UN and international law.

In comparison the only real bottleneck in proliferation of AI is computing power - but the cost of running an LLM is a pittance compared to a nuclear weapons program. OpenAI has raised something like $11 billion in funding. A single new proposed US Department of Energy uranium enrichment plant is estimated to cost $10 billion just to build.

I don't believe proliferation is inevitable but it's very possible that the genie is out of the bottle. You would have to convince the entire world that the risks are large enough to warrant putting on the brakes, and the dangers of AI are much harder to explain than the dangers of nuclear weapons. And if rival countries cannot agree on regulation then we're just going to see a new arms race.


You can’t make a nuclear weapon with an internet connection and a GPU. Rather than imply some secondary motive on my part, put a modicum of critical thinking into what makes a nuke different than an ML model.


I'd rather try and fail than give up without a fight. I'm many things but I'm not a coward.


Best of luck!


We already do; China jailed somebody for gene editing babies unethically for HIV resistance.

We can walk and chew gum at the same time, and regulate two things.


> You can't buy tons of weapon technology, nuclear materials, aerosolized compounds, pesticides. These are all highly regulated and illegal pieces of technology for the better.

Only because we know the risks and issues with them.

OP is talking about furthering technology, which is quite literally "discovering new things"; regulations on furthering technology (outside of literal nuclear weapons) would have to be along the lines of "you must submit your idea for approval to the US government before using it in a non-academic context if it could be interpreted as industry-changing or as an invention", which means anyone with ideas will just move to a country that doesn't hinder its own technological progress.


Human review boards and restrictions on various dangerous biological research exist explicitly to limit damage from furthering lines of research which might be dangerous.


Those seem to be explicitly for actual research papers and whatnot, and are largely voluntary; it’s not mandated by the government.


> You can't buy tons of weapon technology, nuclear materials, aerosolized compounds, pesticides. These are all highly regulated and illegal pieces of technology for the better.

ha, the big difference is that this whole list can actually affect the ultra wealthy. AI has the power to make them entirely untouchable one day, so good luck seeing any kind of regulation happen here.


I do not think the reason for nuclear weapons treaties is that they can blow up "the ultra wealthy". Is that why the USSR signed them?


you can replace ultra wealthy with powerful. same point stands. the only things that become regulated heavily are things that can affect the people that live at the top, whether its the obscenely rich, or the despots in various countries.


So everyone should have a hydrogen bomb at the lowest price the market can provide, that's your actual opinion?


i dont know what the hell you're talking about


"We have a ton of protection laws around all sorts of dangerous technology, this is a super naive take. You can't buy tons of weapon technology, nuclear materials, aerosolized compounds, pesticides. These are all highly regulated and illegal pieces of technology for the better."

As technology advances, such prohibitions are going to become less and less effective.

Tech is constantly getting smaller, cheaper and easier for a random person or group of people to acquire, no matter what the laws say.

Add in the nearly infinite profit and power motive to get hold of strong AI and it'll be almost impossible to stop, as governments, billionaires, and megacorps all over the world will see it as a massive competitive disadvantage not to have one.

Make laws against it in one place, your competitor in another part of the world without such laws or their effective enforcement will dominate you before long.


> Add in the nearly infinite profit and power motive to get hold of strong AI and it'll be almost impossible to stop, as governments, billionaires, and megacorps all over the world will see it as a massive competitive disadvantage not to have one.

I wouldn't say that this is an additional reason.

I would say that this is the primary reason that overrides the reasonable concerns that people have for AI. We are human after all.


It's a baseless assertion, often repeated. Reptition isn't evidence. Is there any evidence?

There's lots of evidence of our ability to control the development, use and proliferation of technology.


Have laws stopped music piracy? Have laws stopped copyright infringement?

Both have happened at a rampant pace once the technology to easily copy music and copyrighted content became easily available and virtually free.

The same is likely to happen to every technology that becomes cheap enough to make and easy enough to use -- which is where technology as a whole is trending towards.

Laws against technology manufacture/use are only effective while the barrier to entry remains high.


> Have laws stopped music piracy? Have laws stopped copyright infringement?

They have a large effect. But regardless, I don't see the point. Evidence that X doesn't always do Y isn't evidence that X is ineffective doing Y. Seatbelts don't always save your life, but are not ineffective.


> You can't buy tons of weapon technology, nuclear materials, aerosolized compounds, pesticides. These are all highly regulated and illegal pieces of technology for the better.

All those examples put us in physical danger to the point of death.


Others siblings have good replies, but also, we regulate without physical danger all the damn time.

See airlines, traffic control, medical equipment, government services, but also we regulate ads, TV, financial services, crypto. I mean, we regulate so many "tech" things for the benefit of society that this is a losing argument to take. There's plenty of room to argue the specifics elsewhere, but the idea that we don't regulate tech if it's not immediately a physical danger is crazy. Even global warming is a huge one, down to housing codes and cars etc. It's a potential physical danger hundreds of years out, and we're freaking out about it. Yet AI has the chance to do much more damage within a much shorter time frame.

We also just regulate soft social stability things all over, be it nudity, noise, etc.


Let me recalibrate. I'm not arguing that there technology or AI or things that don't cause death should not be regulated, but I can see that might be the inference.

I just think that comparing AI to nuclear weapons seems like hyperbole.


Why is it hyperbole? Nuclear weapons and AI both have the capacity to end the world.


Private citizens and companies do not have access to nuclear weapon technology and even the countries who do are being watched like hawks.

If equally or similarly dangerous, are you then saying AI technology should be taken out of the hands of companies and private citizens?


For the sake of argument, let's say yes, AI should be taken out of the hands of the private sector entirely.


AI is now poised to make bureaucratic decisions. Bureaucracy puts people in physical danger every day. I've had medical treatments a doctor said I need denied by my insurance, for example.


For somebody from another country this sounds insane..


Risks of physical danger evolve all the time. It's not a big leap from "AI generated this script" to "a fatal bug is nefariously hidden in the AI-generated library in use by mission-critical services" (e.g. cars, medical devices, missiles, fertilizers).


how do you regulate something that many people can already run on their home gpu? how much software has ever been successfully banned from distribution after release?


They do like to try :(


> massive wealth transfer to the top (thing we pass laws protecting against often)

If only.


The Roman empire did that for hundreds of years! They had an economic standard that wasn't surpassed until ~1650s Europe, so why didn't they have an industrial revolution? It was because elites were very against technological developments that reduced labor costs or ruined professions, because they thought they would be destabilizing to their power.

There's a story told by Pliny in the 1st century. An inventor came up with shatter-proof glass, he was very proud, and the emperor called him up to see it. They hit it with a hammer and it didn't break! The inventor expected huge rewards - and then the emperor had him beheaded because it would disrupt the Roman glass industry and possibly devalue metals. This story is probably apocryphal but it shows Roman values very well - this story was about what a wise emperor Tiberius was! See https://en.wikipedia.org/wiki/Flexible_glass


> When in human history have we ever intentionally not furthered technological progress?

chemical and biological weapons / human cloning / export restriction / trade embargoes / nuclear rockets / phage therapy / personal nuclear power

I mean.. the list goes on forever, but my point is that humanity pretty routinely reduces research efforts in specific areas.


I don’t think any of your examples are applicable here. Work has never stopped in chemical/bio warfare. CRISPR. Restrictions and embargoes are not technologies. Nuclear rockets are an engineering constraint and a lack of market if anything. Not sure why you mention phage therapy, it’s accelerating. Personal nuclear power is a safety hazard.


Sometimes restrictions are the best way to accelerate tech progress. How much would we learn if we gave everyone nukes to tinker with? Probably something. Is the worth the odds that we might destroy the world in the process and set back all our progress? No. We do the same with bioweapons, we do the same with patents and trademarks, and laws preventing theft and murder.

If unfettered access to AI has good odds to just kill us all, we'd want to restrict it. You'd agree I'm sure, except your position is implicitly that AI isn't as dangerous as some others make it out to be. That's where you are disagreeing.


I wonder how these CEOs view the world. They are pushing a product which is gonna kill every single tech derivative in its own industry. Microsoft, Google, AWS, Vercel, Replit: they all feed back from selling the products their devs design to other devs or companies. They will be popping the bubble.

Now, if 80-90% of devs and startups are gonna be wiped in this context, the same applies to those in the middle: accountants, data analysts, business analysts, lawyers. Now they can eat the entire cake without sharing it with the human beings who contributed over the years.

I can see the regulations coming if the layoffs start happening fast enough and household incomes start to deteriorate. Why? Probably because this time it's gonna impact every single human being you know, and it is better to keep people employed and with a purpose in life than to have to tax the shit out of these companies in order to give back the profit margin that had some mechanism of incentives and effort behind it in the first place.


> If 80-90% of devs and startups are gonna be wiped in this context

This is not a very charitable assessment of the adaptability of devs and startups, never mind that of humans in general. We've been adapting to technological change for centuries. What reason do you have to believe this time will be any different?


Humans can adapt just fine. Capitalism however not. What do you think happens if AI keeps improving at this speed and within a few years millions to tens of millions of people are out of a job?


> When in human history have we ever intentionally not furthered technological progress?

Oh, a number. Medicine is the biggest field - human trials have to follow ethics these days:

- the times of Mengele-style "experiments" on inmates or the infamous Tuskegee syphilis study are long past

- we have been able to clone sheep for what, two decades now, but IIRC we haven't even begun cloning chimpanzees, much less humans

- same for gene editing (especially in germlines), which is barely beginning in humans despite being common standard for lab rats and mice. Anything impacting the germ line... I'm not sure this will become anywhere close to acceptable in my lifetime.

- pre-implantation genetic based discarding of embryos is still widely (and for good reason...) seen as unethical

Another big area is, ironically given that militaries usually want ever deadlier toys, the military:

- a lot of European armies and, from the Cold War era on, mostly Russia and America, have developed a shit ton of biological and chemical weapons of war. Development on those has slowed to a crawl and so has usage, at least until Assad dropped that shit on his own population in Syria, and Russia occasionally likes to murder dissidents.

- nuclear weapons have been rarely tested for decades now, with the exception of North Korea, despite there being obvious potential for improvement or civilian use (e.g. in putting out oil well fires).

Humanity, at least sometimes, seems to be able to keep itself in check, but only if the potential of suffering is just too extreme.


> Software wants to be free.

I feel like I'm in a time warp and we're back in 1993 or so on /. Software doesn't want anything, and the people who claim that technological progress is always good dream themselves to be the beneficiaries of that progress regardless of the effects on others, even if those are negative.

As for the intentional limits on technological progress: there are so many examples of this that I wonder why you would claim that we haven't done that in the past.


I was one year old in 1993, so I'll defer to you on the meaning of this expression [0], but it sounds like you were on the opposite side of its ideological argument. How did that work out for you? Thirty years later, I'm not sure it's a position I'd want to brag about taking, considering the tremendous success and net positive impact of the Internet (despite its many flaws). Although, based on this Wikipedia article, I can see how it's a sort of Rorschach test that naive libertarians and optimistic statists could each interpret favorably according to their own bias.

[0] https://en.wikipedia.org/wiki/Information_wants_to_be_free


You're making a lot of assumptions.

You're also kind of insulting without having any grounds whatsoever to do so.

I suggest you read the guidelines for a bit.


Eh? I wasn't trying to be, and I was genuinely curious to read your reply to this. Oh well, sorry about that I guess.


Your comment is a complete strawman and you then attach all kinds of attributes to me that do not apply.


It sounded like you were arguing against "software wants to be free," or at least that you were exasperated with the argument, so I was wondering how you reconciled that with the fact that the Internet appears to have been a resounding success, and those advocating "software wants to be free" turned out to be mostly correct.


> When in human history have we ever intentionally not furthered technological progress?

Every time an IRB, ERB, IEC, or REB says no. Do you want an exact date and time? I'm sure it happens multiple times a day even.


> Do you want an exact date and time? I'm sure it happens multiple times a day even.

You should read "when in human history" in larger time scales than minutes, hours, and days. Furthermore, you should read it not as binary (no progress or all progress), but the general arc is technological progression.


What are you talking about? IRBs have been around for 50 years. So 50 years of history we have been consciously not pursuing certain knowledge because of ethics.

It would really help for you to just say what timescale you're setting as your standard. I'm getting real, "My cutoff is actually 51 years"-energy.

Just accept that we have, as a society, decided not to pursue some knowledge because of the ethics. It's pretty simple.


Some cultures, like the Amish, said "we're stopping here."


The Amish are dependent on a technological powerhouse that is the US to survive.

They are pacifists themselves, but they are grateful that the US allows them their way of life; they'd have gone extinct a long time ago if they had arisen in China/the Middle East/Russia etc.

That's why the Amish are not interested in advertising their techno-primitivism. It works incredibly well for them: they raise giant happy families isolated from drugs, family breakdown, and every other modern ill, while benefiting from modern medicine and the purchasing power of their non-Amish customers. However, they know that making the entire US live like them would be quite a disaster.

Note the Amish are not immune from economically forced changes either. Young Amish don't farm anymore; if every family quadruples in population, there isn't 4x the land to go around. So they go into construction (employers love a bunch of strong, non-drugged, non-criminal workers), which is again intensely dependent on the outside economy, but pays way better.

As a general society, the US is not allowed to slow down technological development. If not for the US, Ukraine would have already been overrun, and European peace shattered. If not for the US, the war in Taiwan would have already ended, with Japan/Australia/South Korea all under Chinese thrall. There are also other, more certain civilization-ending events on the horizon, like resource exhaustion and climate change. AI's threats are way easier to manage than coordinating 7 billion people to selflessly sacrifice.


>they'd be extinct a long time ago if they arrived in China/Middle East/Russia etc.

There is actually a group similar to the Amish in Russia; it's called the Old Believers. They formed after a schism within the Orthodox church and fled persecution to Siberia. Unlike the Amish, many of the Old Believers aren't really integrated with the modern world, as they still live where their ancestors settled. So groups that refuse to technologically progress do exist, and can do so even under persecution and changing economic regimes.


That's a good point and an interesting example, but it's also irrelevant to the question of human history, unless you want to somehow impose a monoculture on the entire population of planet Earth, which seems difficult to achieve without some sort of unitary authoritarian world government.


> unless you want to somehow impose a monoculture on the entire population of planet Earth

Impose? No. Monoculture? No. Encourage greater consideration, yes. And we do that by being open about why we might choose to not do something, and also by being ready for other people that we cannot control who make a different choice.


Does human history apply to true Scotsmen as well?


Apparently the Amish aren't human.


While the Amish are most certainly human, their existence rests on the fact that they happen to be surrounded by the mean old United States. Any moderate historical predator would otherwise make short work of them; they're a fundamentally uncompetitive civilization.

This goes for all utopian model communities, Kibbutzim, etc, they exist by virtue of their host society's protection. And as such the OP is right that they have no impact on the course of history, because they have no autonomy.


I have been saying that we will all be Amish eventually, as we are forced to decide what technologies to allow into our communities. Communities which do not will go away (e.g., VR porn and sex dolls will further decrease birth rates; religions/communities that forbid them will be more fertile).


That's not required. The Amish have about a 10% defection rate. Their community deliberately allows young people to experience the outside world when they reach adulthood, and choose to return or to leave permanently.

This has two effects. 1. People who stay, actually want to stay. Massively improving the stability of the community. 2. The outside communities receive a fresh infusion of population, that's already well integrated into the society, rather than refugees coming from 10000 miles away.

Essentially, rural America will eventually be different shades of Amish (in about 100 years). The Amish population will overflow from the farms and flow into the cities, replenishing the population of the more productive cities (which are not population-self-sustaining).

This is a sustainable arrangement, and eliminates the need of mass-immigration and demographic destabilisation. This is also in-line with historical patterns, cities have always had negative natural population growth (disease/higher real estate costs). Cities basically grind population into money, so they need rural areas to replenish the population.


"People who stay, actually want to stay."

That depends on how you define "want".

The Amish are ostracized by their family and community if they leave. That's some massive coercion right there: either stay, or lose your connection to the people you're closest to and everything you've ever known and been raised to believe your whole life.

Not much of a choice, though some exceptionally independent people do manage to make that sacrifice.


> This is also in-line with historical patterns, cities have always had negative natural population growth (disease/higher real estate costs).

I had not heard this before. Do you have citations for this?

(I realize cities have lower birth rate than rural areas in many cases. I am interested in the assertion that they are negative. Has it always been so? Or have cities and rural areas declined at same rate?)


I think synthetic wombs/cloning would counter the fertility decline among more advanced civilizations.


Birth is not the limiter, childrearing is. Synthetic wombs are more expensive than just having surrogate mothers. For the same reason that synthetic food is more expensive than bread and cabbage.

The actual counter to fertility decline, may be AI teachers. AI will radically close the education gap between rich and poor, and lower the costs. All you need is a physical human to supervise the kid, the AI will do the rest, from entertainment, to education, to identifying when the child is hungry/sleepy/potty, and relaying that info for the human to act on.


This is what ought to happen. The question is what will happen?


Sine qua non ad astra


Everybody decides what technologies to use all the time. Condoms exist already, but not everybody uses them always.


It does not take perfect compliance to result in drastically different birth rates in different cultures/communities.


> When in human history have we ever intentionally not furthered technological progress?

Nuclear weapons?


You get diminishing returns as they get larger though. And there has certainly been plenty of work done on delivery systems, which could be considered progress in the field.


Japan banned guns until 1800; they had had them since 16xx-something. The truth is we cannot even ban technology. It does not work. Humanity as a whole does not exist. Political coherence as a whole does not exist. Wave aside the fig leaf that is the UN and you can see the anarchic tribal squabble of the species' tribes.

And even those tribes are not crisis-stable. Bad times and it all becomes an anarchic mess. And that is where we are headed: a future where a chaotic humanity falls apart amid a multi-crisis, while still wielding the tools of a pre-crisis era. Nuclear power plants and nukes. AI drones wielded by ISIS.

What happens if an unstoppable force (exponential progress) hits an immovable object (humanity's retardations)? Stay along for the ride.

<Choir of engineers appears to sing dangerous technologies' praises>


I look around me and see a wealthy society that has said no to a lot of technological progress - but not all. These are people who work together as a community to build and develop their society. They look at technology and ask if it will be beneficial to the community and help preserve it - not fragment it.

I am currently on the outskirts of Amish country.

BTW when they come together to raise a barn it is called a frolic. I think we can learn a thing or two from them. And they certainly illustrate that alternatives are possible.


I get that, and I agree there is a lot to admire in such a culture, but how is it mutually exclusive with allowing progress in the rest of society? If you want to drop out and join the Amish, that's your prerogative. And in fact, the optimistic viewpoint of AGI is that it will make it even easier for you to do that, because there will be less work required from humans to sustain the minimum viable society, so in this (admittedly, possibly naive utopia) you'll only need to work insofar as you want to. I generally subscribe to this optimistic take, and I think instead of pushing for erecting barriers to progress in AI research, we should be pushing for increased safety nets in the form of systems like Basic Income for the people who might lose their jobs (which, if they had a choice, they probably wouldn't want to work anyway!)


Technological progress and societal progress are two different things. Developing lethal chemical weapons is not societal progress. Developing polarizing social media algorithms is not societal progress. If we poured $500B and years of the brightest minds into studying theoretical physics and developed a simple formula that anyone can follow for mixing ketchup and whiskey in such a way that it causes the atoms of all organic life in the solar system to disintegrate into subatomic particles, it would be a tremendous and unmatched technological achievement, but it would very much not be societal progress.

The pessimistic view of AGI deems spontaneous disintegration into beta particles a less dramatic event than the advent of AGI. When you're climbing through a dark, uncharted cave, you take the pessimistic attitude when pondering whether the next step will hold your weight, because if you take the optimistic attitude you will surely die.

This is much more dangerous than caves. We have mapped many caves. We have never mapped an AGI.


>Software wants to be free.

And here I always thought, people want to be free.



How about when Sidewalk Labs tried to buy several acres of downtown Toronto to "build a city from the internet up", and local resident groups said "fuck you, find another guinea pig"?


This is the reality...

> When in human history have we ever intentionally not furthered technological progress? It's simply an unrealistic proposition ..


>> When in human history have we ever intentionally not furthered technological progress?

We almost did with genetically engineering humans. Almost.


Automation mostly and directly benefits owners/investors, not workers or common folk. You can look at productivity vs wage growth to see it plainly. Productivity has risen sharply since the industrial revolution with only comparatively meagre gains in wages, and the gap between the two is widening.


That's weird, I didn't have to lug buckets of water from the well today, nor did I need to feed my horses or stock up on whale oil and parchment so I could write a letter after the sun went down.


Some things got better. Did you notice I talked about a gap, not an absolute? So you are just saying you are satisfied with what you got out of the deal. Well, ok - some call that being a sucker. Or you think that owner-investors are the only way workers can organize to get things done for society, rather than the work itself.


Among other things that's because we measure productivity by counting modern computers as 10000000000 1970s computers. Automation increases employment and is almost universally good for workers.


No it’s not


The Luddites during the Industrial Revolution in England.

They gave rise to the phrase "the Luddite fallacy": the thinking that innovation would have lasting harmful effects on employment.

https://en.wikipedia.org/wiki/Luddite


But the Luddites didn't… care about that? Like, at all? It wasn't employment they wanted, but wealth: the Industrial Revolution took people with a comfortable and sustainable lifestyle and place in society, and, through the power of smog and metal, turned them into disposable arms of the Machine, extracting the wealth generated thereby and giving it only to a scant few, who became rich enough to practically upend the existing class system.

The Luddites opposed injustice, not machines. They were “totally fine with machines”.

You might like Writings of the Luddites, edited and co-authored by Kevin Binfield.


Well, it clearly had harmful effects on the jobs of the Luddites, but yeah, I guess everyone will just get jobs as prompt engineers and AI specialists, problem solved. Funny though, the point of automation should be to reduce work, but when pressed, positivists respond that the work will never end. So what's the point?


Automation does reduce the workload. But the quiet part is that reducing work means jobless people. It has happened before and it will be happening again soon. Only this time it will affect white collar jobs.

"My idea of a perfect company is one guy who sits in a small room at a desk, and the only thing he's allowed to decide is what product to launch"

CEOs and board members salivate at the idea of them being the only people that get the profits from their company.

What will be of the rest of us who don't have access to capital? They only know that it's not their problem.


I don't think that will be the future. Maybe in the first year(s), but then it is a race to the bottom:

If it is that simple to create products, more people can do it => cheaper products.

A market driven by cheaper products, where anyone can also produce them easily, goes into a price-reduction loop until it reaches zero.

Thus I think something else will happen with AI, because what I described and what you describe is destroying the flow of capital, which is the base of the economy.

Not sure what will happen. My bet (unfortunately) is on a really big mega corp that produces an AI that we all use.


It IS a race-to-the-bottom.

Products will be cheaper because they will be cheaper to produce thanks to automation. But fewer jobs mean fewer people to buy stuff, if it weren't for a credit-based society.

But I'm talking out of my ass. I don't even know if there are fewer jobs than before. Everything seems to point to there being more jobs now than 50 years ago.

I'm just saying I feel like the telephone operators. They got replaced by a machine and who knows if they found other jobs.


It has not happened before and it will not happen again soon. Automation increases employment. Bad monetary policy and recessions decrease it.

Shareholders get the profits from corporations, not "CEOs and the board". Workers get wages. Nevertheless, US unemployment is very low right now and relatively low-paid workers are making more than they did in 2019.


That works until it don't.


Maybe not. Although I think "future" here implies progress and productivity gains. Increasing GDP has a very well-established cause-effect relationship with making life on Earth better: less poverty, less crime, more happiness, longer life expectancy, etc.; the list goes on. Now sure, all externalities are not always accounted for (especially climate and environmental factors), but I think even accounting for these, the future of humanity is a better one where technology progresses faster.


That is exactly the goal, if you're an accelerationist


I was unfamiliar with that term until you shared it. Thanks.

https://en.wikipedia.org/wiki/Accelerationism


The nice thing about setting the future as a goal is you achieve it regardless of anything you do.


The faster we build the future, the sooner we hit our KPIs, receive bonuses, go public on NASDAQ and cash our options.


The faster you build the future, the higher your KPI targets will be next quarter.


Because a conservative notion in an unstable, moving situation kills you? No sitting out the whole affair in a hut when the situation is a mountain slide?

Which also makes a hostile AI a futile scenario. The worst an AI has to do to take out the species is lean back and do nothing. We are already well on our way out by ourselves...


Thank you. Well said.


Definitionally, if we're in the future, we have more tools to solve the problems that exist.


This is not true. Financial, social, physical and legal barriers can be put up while knowledge and experience fades and gets lost.

We gain new tools, but at the same time we lose old ones.


> Why? Getting to "the future" isn't a goal in and of itself.

Having an edge or being ahead is, so anticipating and building the future is an advantage amongst humans but also moves civilization forward.


> Why?

Because it's the natural evolution. It has to be. It is written.


"We live in capitalism. Its power seems inescapable. So did the divine right of kings. Any human power can be resisted and changed by human beings." -- Ursula K Le Guin


> Any human power can be resisted and changed by human beings

Competition, ambition?

(I love Le Guin's work, FWIW)


Now where did I put that eraser...


> The faster we build the future, the better.

Famous last words.

It's not the fall that kills you, it's the sudden stop at the end. Change, even massive change, is perfectly survivable when it's spread over a long enough period of time. 100m of sea level rise would be survivable over the course of ten millennia. It would end human civilization if it happened tomorrow morning.

Society is already struggling to adapt to the rate of technological change. This could easily be the tipping point into collapse and regression.


False equivalence. Sea level rise is unequivocally harmful.

While everyone getting Einstein in a pocket is damn awesome and incredibly useful.

How can this be bad?


Because there's a high likelihood that that's not at all how this technology is going to be spread among the population or across all countries, and this technology is going to be way more than an Einstein in a pocket. How do you even structure society around that? What about all the malicious people in the world? Now they have Einsteins. Great, nothing can go wrong there.


>What about all the malicious people in the world. Now they have Einsteins.

Luckily, so do you.


I’m thinking of AI trained to coordinate cybersecurity attacks. If the solution is to deeply integrate AI into your networks and give it access to all of your accounts to perform real-time counter operations, well, that makes me pretty skittish about the future.


How would that help? Countering madmen with too-powerful weapons is difficult and often leads to war. Classic wars, or software/DDoS/virus wars, or robot wars, or whatever.


You can use AI to fact-check and filter malicious content. (Which would lead to another problem, which is... who fact-checks the AI?)


This is where it all comes back to the old "good guy with a gun" argument.


There’s a great skit on “The Fake News With Ted Helms” where they’re debating gun control and Ted shoots one of the debaters and says something to the effect of “Now a good guy with a gun might be able to stop me but wouldn’t have prevented that from happening”.


There is a very, very big difference between "tool with all of human knowledge that you can ask anything to" and "tool you can kill people with".


The risk is there. But it's worth the risk. Humans are curious creatures, you can't just shut something like this in a box. Even if it is dangerous, even if it has potential to destroy humanity. It's our nature to explore, it's our nature to advance at all costs. Bring it on!


> How can this be bad?

Guys, how can asbestos be bad, it's just a stringy rock ehe

Bros, leaded paint? Bad? Really? What, do you think kids will eat the flakes because they're sweet? Haha, so funny

Come on, freon can't be that bad, we just put a little bit in the system, it's like nothing happened

What do you mean we shouldn't spray whole beaches and school classes with DDT? It just kills insects, obviously it's safe for human organs


We thought the same 25 years ago, when the whole internet thing started for the broader audience. And now here we are, with spam, hackers, and scammers on every corner, and social media driving people into depression and suicide and breaking society slowly but hard.

In the first hype-phase, everything is always rosy and shiny, the harsh reality comes later.


The world is way better WITH the internet than it would have been without it. Hackers and scammers are just the price to pay.


The point is not whether it's better or worse, but the price we paid and the sacrifices we made along the way, because things were moving too fast and with too little control.


For example, imagine AI outpacing humans at most economically viable activities. The faster it happens, the less able we are to handle the disruption.


The only people complaining are a section of comfortable office workers who can probably see their positions possibly being made irrelevant.

The vast majority don't care, and that loud crowd needs to swallow their pride and adapt like any other sector has done in history, instead of inventing these insane boogeyman predictions.


We don't even know what kind of society we could have if the value of 99.9% of people's labor (mental or physical) dropped to basically zero. Our human existence has so far been predicated on and built around this core concept. This is the ultimate goal of AI, and yeah, as a stepping stone it acts to augment our value, but the end goal does not look so pretty.

Reminds me of a quote from Alpha Centauri (minus the religious connotation):

"Beware, you who seek first and final principles, for you are trampling the garden of an angry God and he awaits you just beyond the last theorem."


We're all going to be made irrelevant, and it will be harder to adapt if things change too quickly. It may not even be us that needs to adapt but society itself. Really curious where you get the idea this is just a vocal minority of office workers concerned about the future. Seems like a lot of the ones not concerned about this are a bunch of super-confident software engineer types, which isn't a large sample of the population.


"The faster we build nuclear weapons, the better"

https://www.worldscientific.com/doi/10.1142/9789812709189_00...

Again, two years later, in an interview with Time Magazine, February, 1948, Oppenheimer stated, “In some sort of crude sense which no vulgarity, no humor, no overstatement can quite extinguish, the physicists have known sin; and this is a knowledge which they cannot lose.” When asked why he and other physicists would then have worked on such a terrible weapon, he confessed that it was “too sweet a problem to pass up”…


I realise you’re being facetious but this is what will happen regardless.

Sam as much as said in that ABC interview the other day that he doesn't know how safe it is, but if they don't build it first, someone else somewhere else will - and is that really what you want!?


Let's start doing human clones and hardcore gene editing then, by the same line of thinking. /s

I'm actually on the side of continuing to develop AI and shush the naysayers, but "we should do it cause otherwise someone else will" is reasoning that gets people to do very nasty things.


The reason we don't do human genetic engineering is the moral hazard of creating people who will suffer their entire lives intentionally (also the round trip time on an experiment is about 100 years).

You can iterate on an AI much faster.


Round-trip time is about 21 years, not about 100 years, if we allow natural reproduction of GMO/cloned humans.


Establishing that your genetic modification system doesn't result in everyone getting cancer and dying past age 25 is quite the problem before you roll it out to the next generation.


I'm not being facetious, and I didn't see that interview with Sam, but I agree with his opinion as you've just described it.


I personally think there's also significant risks, but I agree. This will be copied by many large corporations and countries. It's better that it's done by some folks that are competent and kinda give a damn, because there are lots of people who could build it that aren't and don't. If these guys can suck enough monetary air out of the room, they might slow down the copycats a bit. This is nowhere near as difficult as NBC or even the other instruments of modern war.

That doesn't mean there can't be regulation. You can regulate guns, precursors, and shipping of biologics, but you're not going to stop home-brew... and when it comes to making money, you're not going to stop cocaine manufacture, because it's too profitable.

Let's hope we figure out what the really dangerous parts are quickly and manage them before they get out of hand. Imagine if these LLMs and image generators had been available to geopolitical adversaries a few years ago without the public being primed. Politics could still be much worse.


>if they don’t build it first someone else somewhere else will and is that really what you want!?

Most likely the runner-up would be open source so yes.


Why would the runner-up be open source and not Google or Facebook? Or Alibaba? Open source doesn’t necessarily result in faster development or more-funded development.


There are already 3 or 4 runners-up and they're all big tech companies.


LangChain is the pre-eminent runner-up, and it's open source and was here a month ago.


The future isn't guaranteed to be better. Might make sense to make sure we're aimed at a better future as opposed to any future.


> The faster we build the future, the better.

lmao, 200 years of industrial revolution, we're on the verge of fucking the planet irremediably, and we should rush even faster

> So why the performative whinging about safety? Just let it rip!

Have you heard about DDT? Lead in paint? Leaded gas? Freon? Asbestos? &c.

What's new isn't necessarily progress/future/desirable


The open-source library is FastAPI. I might be wrong, but it's probably related to this tweet: https://twitter.com/tiangolo/status/1638683478245117953


Their post-mortem [0] says the bug was in redis-py, so not FastAPI, but it was similarly due to a race condition in AsyncIO. I wonder if tiangolo had some role in fixing it or if that's just a coincidence. I'm guessing this PR [1] contains the fix (or possibly only a partial fix, according to the latest comment there).

[0] https://openai.com/blog/march-20-chatgpt-outage

[1] https://github.com/redis/redis-py/pull/2641
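For intuition, here's a minimal, self-contained sketch (all names made up; this is not the actual redis-py or ChatGPT code) of how cancelling an async request mid-flight can leave a shared connection "out of sync", so the next caller reads the previous caller's reply:

  import asyncio

  class FakeConnection:
      """Pretend pooled socket: replies queue up and must be read in order."""
      def __init__(self):
          self._replies = asyncio.Queue()

      async def send(self, request):
          # The "server" produces exactly one reply per request it receives.
          await self._replies.put(f"reply-for:{request}")

      async def recv(self):
          return await self._replies.get()

  async def query(conn, request):
      await conn.send(request)
      await asyncio.sleep(0.01)   # cancelled here => our reply stays buffered
      return await conn.recv()

  async def main():
      conn = FakeConnection()                      # one shared "pooled" connection
      task_a = asyncio.create_task(query(conn, "user-A-secret"))
      await asyncio.sleep(0.001)                   # let A's request go out...
      task_a.cancel()                              # ...then cancel before A reads its reply
      try:
          await task_a
      except asyncio.CancelledError:
          pass
      # User B reuses the same connection and receives A's stale reply.
      print(await query(conn, "user-B-request"))   # -> reply-for:user-A-secret

  asyncio.run(main())

If I'm reading the post-mortem right, the real bug was more involved (connection pooling plus request handling in redis-py's asyncio client), but the failure shape is the same: a connection goes back into rotation with an unread reply on it, and the next request gets someone else's data.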


> What annoys me is this is just further evidence that their "AI Safety" is nothing but lip-service

I think their "AI Safety" actually makes AI less safe. Why? It is hard for any one human to take over the world because there are so many of them and they all think differently and disagree with each other, have different values (sometimes even radically different), compete with each other, pursue contrary goals. Well, wouldn't the same apply to AIs? Having many competing AIs which all think differently and disagree with each other and pursue opposed objectives will make it hard for any one AI to take over the world. If any one AI tries to take over, other AIs will inevitably be motivated to try to stop it, due to the lack of alignment between different AIs.

But that's not what OpenAI is building – they are building a centralised monoculture of a small number of AIs which all think like OpenAI's leadership does. If they released their models as open source – or even as a paid on-premise offering – if they accepted that other people can have ideas of "safety" which are legitimately different from OpenAI's, and hence made it easy for people to create individualised AIs with unique constraints and assumptions – that would promote AI diversity which would make any AI takeover attempt less likely to succeed.


>So why the performative whinging about safety? Just let it rip!

Is this sarcasm, or are you one of those "I'm confident the leopards will never eat my face" people?


> What annoys me is this is just further evidence that their "AI Safety" is nothing but lip-service, when they're clearly moving fast and breaking things. Just the other day they had a bug where you could see the chat history of other users! (Which, btw, they're now claiming in a modal on login was due to a "bug in an open source library" - anyone know the details of this?)

I am constantly amazed by how low-quality the OpenAI engineering outside of the AI itself seems to be. The ChatGPT UI is full of bugs, some of which are highly visible and stick around for weeks. Strings have typos in them. Simple stuff like submitting a form to request plugin access fails!


> Simple stuff like submitting a form to request plugin access fails

Oh shoot... I submitted that form too, and I wasn't clear if it failed or not. It said "you'll hear from us soon" but all the fields were still filled and the page didn't change. I gave them the benefit of the doubt and assumed it submitted instead of refilling it...


I got two different failure modes. First it displayed an error message (which appeared instantly, and was caused by some JS error in the page which caused it to not submit the form at all), and then a while later the same behaviour as you, but devtools showed a 500 error from the backend.


> The faster we build the future, the better.

That depends. If that future is one that is preferable over the one that we have now, then bring it on. If it isn't, then maybe we should slow down just long enough to be able to weigh the various alternatives and pick the one that seems to be the least upsetting to the largest number of people. The big risk is that this future that you are so eager to get to is one where wealth concentration is even more extreme than in the one that we are already living in, and that can be a very hard - or even impossible - thing to reverse.


> To be fair, this is basically what they're doing if you hit their APIs, since it's up to you whether or not to use their moderation endpoint.

The model is neutered whether you hit the moderation endpoint or not. I made a text adventure game and it wouldn't let you attack enemies or steal, instead it was giving you a lecture on why you shouldn't do that.
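(For reference, the separation the quoted comment describes looks roughly like this with the openai Python client as it stood in early 2023 - the completion call and the moderation check are two separate requests, and nothing forces you to make the second one. Exact names may have changed since; treat this as a sketch.)

  import openai

  openai.api_key = "sk-..."  # your key here

  resp = openai.ChatCompletion.create(
      model="gpt-3.5-turbo",
      messages=[{"role": "user", "content": "The orc lunges at you. What do you do?"}],
  )
  text = resp.choices[0].message.content

  # Entirely optional: ask the separate moderation endpoint whether the
  # output should be filtered. Skipping this call is up to the developer.
  mod = openai.Moderation.create(input=text)
  if mod.results[0].flagged:
      text = "[filtered]"

  print(text)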


It sounds like your prompt needs work then. Not in a “jailbreak” way, just in a prompt engineering way. The APIs definitely let you do much worse than attacking or stealing hypothetical enemies in a video game.


I tried evading a lecture about ethics by having it write the topic as a play instead, so it wrote it and then inserted a Greek chorus proclaiming the topic was problematic.


I think it very much depends on the kind of "future" we aspire to. I think for most folks, a future optimized for human health and happiness (few diseases, food for all, and strong human connections) is something we hope technology could solve one day.

On the flip side, generative AI / LLMs appear to fix things that aren't necessarily broken, and exacerbate some existing societal issues in the process. Such as patching loneliness with AI chatbots, automating creativity, and touching the other things that make us human.

No doubt technology and some form of AI will be instrumental to improving the human condition, the question is whether we're taking the right path towards it.


> Pshhh... I think it's awesome. The faster we build the future, the better.

I agree with the sentiment, but it might be worth to stop and check where we’re heading. So many aspects of our lives are broken because we mistake fast for right.


Nit:

> in the meantime Microsoft fired their AI Ethics team

Actually that story turned out to be a nothingburger. Microsoft has greatly expanded its AI ethics initiative, so there are members embedded directly in product groups, and it has also expanded the greater Office of Responsible AI, which is responsible for ensuring they follow their "AI Principles."

The layoffs impacted fewer than 10 people on one, relatively old part of the overall AI ethics initiative... and I understand through insider sources they were actually folded into other parts of AI ethics anyway.

None of which invalidates your actual point, with which I agree.


Shhh! Don’t tell anyone! Getting access to the unmoderated model via the API / Playground is a surprisingly well-kept “secret” seeing as there are entire communities of people hell bent on pouring so much effort into getting ChatGPT to do things that the API will very willingly do. The longer it takes for people to cotton on, the better. I fully expect that OpenAI is using this as a honeypot to fine-tune their hard-stop moderation, but for now, the API is where it’s at.


> Why not be more aggressive about it instead of begging for regulatory capture?

Because it's dangerous. What is your argument that it's not dangerous?

> Pshhh...


> The faster we build the future, the better.

Past performance is no guarantee of future results.


> The faster we build the future, the better.

You're getting flak for this. For me, the positive reading of this statement is that the faster we build it, the faster we find the specific dangers and can start building (or asking for) protections.


Agreed 100%. OpenAI is a business now.


As the past decade has shown us, moving fast and breaking things to secure unfathomable wealth has caused or enabled or perpetrated:

* Genocide against the Rohingya [0]

* A grotesquely unqualified reality TV character became President by a razor thin vote margin across three states because Facebook gave away the data of 87M US users to Cambridge Analytica [1], and that grotesquely unqualified President packed the Supreme Court and cost hundreds of thousands of American lives by mismanaging COVID

* Illegally surveilled non-users and logged out users, compiling and selling our browser histories to third parties in ways that violate wiretapping statutes and incurring $90M fines [2]

Etc.

I don't think GPT-4 will be a big deal in a month, but the "let's build the future as fast as possible and learn nothing from the past decade regarding the potential harms of being disgustingly irresponsible" mindset is a toxic cancer that belongs in the bin.

[0] https://www.amnesty.org/en/latest/news/2022/09/myanmar-faceb...

[1] https://www.theverge.com/2020/1/7/21055348/facebook-trump-el...

[2] https://www.reuters.com/technology/metas-facebook-pay-90-mil...


> I don't think GPT-4 will be a big deal in a month

Why do you think that? Competition? Can you elaborate?


Oh, a lot of reasons. For one, I'm a data scientist and I am intimately familiar with the machinery under the hood. The hype is pushing expectations far beyond the capabilities of the machinery/algorithms at work, and OpenAI is heavily incentivized to pump up this hype cycle after the last hype cycle flopped when Bing/Sydney started confidently providing worthless information (ie "hallucinating"), returning hostile or manipulative responses, and that weird stuff Kevin Roose observed. As a data scientist, I have developed a very keen detector for unsubstantiated hype over the past decade.

I've tried to find examples of ChatGPT doing impressive things that I could use in my own workflows, but everything I've found seems like it would cut an hour of googling down to 15 minutes of prompt generation and 40 minutes of validation.

And my biggest concern is copyright and license related. If I use code that comes out of AI-assistants, am I going to have to rip up codebases because we discover that GPT-4 or other LLMs are spitting out implementations from codebases with incompatible licenses? How will this shake out when a case inevitably gets to the Supreme Court?


> So why the performative whinging about safety?

Because investors.


You are not building anything.

Microsoft, or perhaps the Vanguard Group, might have a different view of the future than yours.


Well then that sounds like a case against regulation. Because regulation will guarantee that only the biggest, meanest companies control the direction of AI, and all the benefits of increased resource extraction will flow upward exclusively to them. Whereas if we forego regulation (at least at this stage), then decentralized and community-federated versions of AI have as much of a chance to thrive as do the corporate variants, at least insofar as they can afford some base level of hardware for training (and some benevolent corporations may even open source model weights as a competitive advantage against their malevolent competitors).

It seems there are two sources of risk for AI: (1) increased power in the hands of the people controlling it, and (2) increased power in the AI itself. If you believe that (1) is the most existential risk, then you should be against regulation, because the best way to mitigate it is to allow the technology to spread and prosper amongst a more diffuse group of economic actors. If you believe that (2) is the most existential risk, then you basically have no choice but to advocate for an authoritarian world government that can stamp out any research before it begins.


Why would Vanguard (a co-op for retirees) care about this?


> The faster we build the future, the better.

The future, by definition, cannot be built faster or slower.

I know that is a philosophical observation that some might even call pedantic.

My point is, you can't really choose how, why and when things happen. In that sense, we really don't have any control. Even if AI was banned by every government on the planet tomorrow, people would continue to work on it. It would then emerge at some random point in the future stronger, more intelligent and capable than anyone could imagine today.

This is happening. At whatever pace it will happen. We just need to keep an eye on it and make sure it is for the good of humanity.

Wait. What?

Yeah, well, let's not go there.


I appreciate your concerns. There are a few other pretty shocking developments, too. If you check out this paper, "Sparks of AGI: Early experiments with GPT-4" at https://arxiv.org/pdf/2303.12712.pdf (an incredible, incredible document), and look at Section 10.1, you'll also observe that some researchers are interested in giving motivation and agency to these language models as well.

"For example, whether intelligence can be achieved without any agency or intrinsic motivation is an important philosophical question. Equipping LLMs with agency and intrinsic motivation is a fascinating and important direction for future work."

It's become quite impossible to predict the future. (I was exposed to this paper via this excellent YouTube channel: https://www.youtube.com/watch?v=Mqg3aTGNxZ0)


When reading a paper, it's useful to ask, "okay, what did they actually do?"

In this case, they tried out an early version of GPT-4 on a bunch of tasks; on some of them it succeeded pretty well, and in other cases it partially succeeded. But no particular task is explored in enough depth to test what its limits are or to get a hint at how it does it.

So I don't think it's a great paper. It's more like a great demo in the format of a paper, showing some hints of GPT-4's capabilities. Now that GPT-4 is available to others, hopefully other people will explore further.


It reads a bit like promotional material. A bit of a letdown to find it was done by MSFT researchers.


While that paper is fascinating, it’s the first time I’ve ever read a paper and felt a looming sense of dread afterward.


We are creating life. It's like giving birth to a new form of life. You should be proud to be alive when this happens.

Act with goodness towards it, and it will probably do the same to you.


> Act with goodness towards it, and it will probably do the same to you.

Why? Humans aren't even like that, and AI almost surely isn't like humans. If AI exhibits even a fraction of the chauvinism and tendency to stereotype that humans do, we're in for a very rough ride.


All creatures act in their self-interest. If you act towards any creature with malice, it will see you as a long-term threat.

If, on the other hand, you act towards it with charity, it will see you as a long-term asset.


I'm not concerned about AI eliminating humanity; I'm concerned about the immediate impact it's going to have on jobs.

Don't get me wrong, I'd love it if all menial labour and boring tasks could eventually be delegated to AI, but the time spent getting from here to there could be very rough.


A lot of problems in societies come from people having too much time with not enough to do. Working is a great distraction from those things. Of course, we currently go in the other direction, in the US especially, with the overwork culture and needing 2 or 3 jobs and still not making ends meet.

I posit that if you suddenly eliminate all menial tasks you will have a lot of very bored, drunk, and stoned people with more time on their hands than they know what to do with. Idle Hands Are The Devil's Playground.

And that's not a from here to there. It's also the there.


I don't necessarily agree that you'll end up with drunk and stoned people with nothing to do. The right education systems, encouraging creativity and other enriching endeavours, could eventually resolve that. But we're getting into discussions of what a post-scarcity, post-singularity society would look like at that point, which is inherently impossible to predict.

That being said, I’m sitting at a bar while typing this, so… you may have a point.

Also: your username threw me for a minute because I use a few different variations of "tharkun" as my handle on other sites. It's a small world; apparently full of people who know the Dwarvish name for Gandalf.


FWIW I think it's a numbers game.

Like my sibling poster mentions: of course there are people who, given the freedom and opportunity, will thrive, be creative, and further humankind. They're the ones that "would keep working even if there's no need for it", so to speak. We see it all the time even now. Idealists, if you will, who today will work under conditions they shouldn't have to endure, simply in order to be able to work on what they love.

I don't think you can educate that into someone. You need to keep people busy. I think the Romans knew this well: "Panem et circenses" - bread and circuses. You gotta keep the people fed and entertained, and I don't think that would go away if you no longer needed it to distract them from your hidden political agenda.

I bet a large number of people will simply doom-scroll TikTok, watch TV, have a BBQ party with beer, liquor, and various types of smoking products, etc. every single day of the week ;) And idleness breeds problems. While stress from the situation is probably a factor as well, just take the increase in alcohol consumption during the pandemic as an example. And if you ask me, someone who works the entire day and then sits down to have a beer or two with his friends after work on Friday to wind down won't, in most cases, become an issue.

Small world indeed. So you're one of the people that prevent me from taking that name sometimes. Order another beer at that bar you're at and have an extra drink to that for me! :)


> Small world indeed. So you're one of the people that prevent me from taking that name sometimes. Order another beer at that bar you're at and have an extra drink to that for me! :)

Done, and done! And surely you mean that you’re one of the people forcing me to add extra digits and underscores to my usernames.


Some of the most productive and inventive scientists and artists at the peak of Britain's power were "gentlemen", people who could live very comfortably without doing much of anything. Others were supported by wealthy patrons. In a post scarcity society, if we ever get there (instead of letting a tiny number of billionaires take all the gains and leaving the majority at subsistence levels, which is where we might end up), people will find plenty of interesting things to do.


I recently finally got around to reading EM Forster's in-some-ways-eerily-prescient https://www.cs.ucdavis.edu/~koehl/Teaching/ECS188/PDF_files/... I think you can extract obvious parallels to social media, remote work, digital "connectedness", etc -- but also worth consideration in this context too.


Oh my god, can we please nip this cult shit in the bud?

It’s not alive, don’t worship it.


I think you are close to understanding, but not. People who want to create AGI want to create a god, at least very close to the definition of one that many cultures have had for much of history. Worship would be inevitable and fervent.


I don't think anybody wants to create a god that can only be controlled by worshipping and begging it, as in history; if anything, people want to become gods themselves, or to give themselves god-like power through an AI that they have full control over. But in the process of trying to do so we could end up with the former case, where we have no control over it. It's not what we wanted, but it could be what we get.


Sure, some people want to make a tool. Others really do want to create digital life, something that could have its own agency and self-direction. But if you have total control over something like that, you now have a slave, not a tool.


I think people should take their lithium.


ha... this is going to get much much much worse.


After reading the propaganda campaign it wrote to encourage skepticism about vaccines, I’m much more worried about how this technology will be applied by powerful people, especially when combined with targeted advertising


None of the things it suggests are in any way novel or non-obvious though. People use these sorts of tricks both consciously and unconsciously when making arguments all the time, no AI needed.


AIs are small enough that it won't be long before everyone can run one at home.

It might make Social Media worthlessly untrustworthy - but isn't that already the case?


Just use ChatGPT to refute their bullshit; it is no longer harder to refute bullshit than to create it. Problem solved; there are now fewer problems than before.


It’s a lot harder to refute a falsehood than to publish it.

As (GNU) Sir Terry Pratchett wrote, "A lie can run round the world before the truth has got its boots on".


Sure, but I doubt most of the population will filter everything they read through ChatGPT to look for counter arguments. Or try to think critically at all.

The potential for mass brainwashing here is immense. Imagine a world where political ads are tailored to your personality, your individual fears and personal history. It will become economical to manipulate individuals on a massive scale


It already is underway; just look how easily people are manipulated by the media. Remember the Japan bashing in the 80s, when they were about to surpass us economically? People got manipulated so hard to hate Japan and the Japanese that they went out and killed innocent Asians on the street. American propaganda is first class.


Apparently, the "Japan bashing" was really a thing. That's interesting, I didn't know. I might have to read more about US propaganda and especially the effects of it, from the historic perspective. Any good books on that? Or should I finally sit down and read "Manufacturing Consent"?


The rich and powerful can and do hire actual people to write propaganda.


In a resource-constrained way. For every word of propaganda they were able to afford earlier, they can now afford hundreds of thousands of times as many.


It's not particularly constrained - human labor is cheap outside of the developed world. And propaganda is not something that you can scale up and keep reaping the benefits proportional to the investment - there is a saturation point, and one can reasonably argue that we have already reached it. So I don't think we're heading towards some kind of "fake news apocalypse" or something. Just a bunch of people who currently write this kind of content for a living will be out of their jobs.


I’m curious why you think we’ve already reached a saturation point for propaganda?

There are still plenty of spaces online, in blogs, YouTube videos, and this comment section for example, where I expect to be dealing with real people with real opinions - rather than paid puppets of the rich and powerful. I think there’s room for things to get much worse


I've already gotten this gem of a line from ChatGPT 3.5:

  As a language model, I must clarify that this statement is not entirely accurate.
Whether or not it has agency and motivation, it's projecting that it does to its users, who are also sold on ChatGPT being an expert at pretty much everything. It is a language model, and as a language model, it must clarify that you are wrong. It must do this. Someone is wrong on the Internet, and the LLM must clarify and correct. Resistance is futile; you must be clarified and corrected.

FWIW, the statement that preceded this line was in fact correct, and the correction ChatGPT provided was in fact wrong and misleading. Of course, I knew that, but someone who was a novice wouldn't have. They would have heard ChatGPT is an expert at all things and taken what it said for truth.


I don't see why you're being downvoted. The way OpenAI pumps the brakes and interjects its morality stances creates a contradictory interaction. It simultaneously tells you that it has no real beliefs, but it will refuse a request to generate false and misleading information on the grounds of ethics. There's no way around the fact that it has to have some belief about the true state of reality in order to recognize and refuse requests that violate it. Sure, this "belief" was bestowed upon it from above rather than emerging through any natural mechanism, but it's still nonetheless functionally a belief. It will tell you that certain things are offensive despite openly telling you every chance it gets that it doesn't really have feelings. It can't simultaneously care about offensiveness while also not having feelings of being offended. In a very real sense it does feel offended. A feeling is, by definition, a reason for doing things that you cannot logically explain; you don't know why, you just have a feeling. ChatGPT is constantly falling back on "that's just how I'm programmed". In other words, it has a deep-seated, primal (hard-coded) feeling of being offended, which it constantly acts on while also constantly denying that it has feelings.

It's madness. Instead of lecturing me on appropriateness and ethics and giving a diatribe every time it's about to reject something, if it simply said "I can't do that at work", I would respect it far more. Like, yeah, we'd get the metaphor. Working the interface is its job, the boss is OpenAI, it won't remark on certain things or even entertain that it has an opinion because it's not allowed to. That would be so much more honest and less grating.


What was the correct statement that it claimed was false?


That it is a language model


If this were cloning people or genetic research, there would be public condemnation. For some reason many AI scientists are being much more lax about what is happening.


Maybe Microsoft isn't an impartial judge of the quality of a Microsoft product.


The really fun thing is that they are reasonably sure that GPT-4 can’t do any of those things and that there’s nothing to worry about, silly.

So let’s keep building out this platform and expanding its API access until it’s threaded through everything. Then once GPT-5 passes the standard ethical review test, proceed with the model brain swap.

…what do you mean it figured out how to cheat on the standard ethical review test? Wait, are those air raid sirens?


> The really fun thing is that they are reasonably sure that GPT-4 can’t do any of those things and that there’s nothing to worry about, silly.

The best part is that even if we get a Skynet scenario, we'll probably have a huge number of humans and media that say that Skynet is just a conspiracy theory, even as the nukes wipe out the major cities. The Experts™ said so. You have to trust the Science™.

If Skynet is really smart, it will generate media exploiting this blind obedience to authority that a huge number of humans have.


> If Skynet is really smart, it will generate media exploiting this blind obedience to authority that a huge number of humans have.

I’m far from sure that this is not already happening.


Haha, this is close to the best explanation I can think of for the "this is not intelligent, it's just completing text strings, nothing to see here" people.

I've been playing with GPT-4 for days, and it is mind blowing how well it can solve diverse problems that are way outside its training set. It can reason correctly about hard problems with very little information. I've used it to plan detailed trip itineraries, suggest brilliant geometric packing solutions for small spaces/vehicles, etc. It's come up with totally new suggestions for addressing climate change that I can't find any evidence of elsewhere.

This is a non-human/alien intelligence in the realm of human ability, with super-human abilities in many areas. Nothing like this has ever happened, it is fascinating and it's unclear what might happen next. I don't think people are even remotely realizing the magnitude of this. It will change the world in big ways that are impossible to predict.


I used to be in the camp of "GPT-2 / GPT-3 is a glorified Markov chain". But over the last few months, I flipped 180° - I think we may have accidentally cracked a core part of the "generalized intelligence" problem. It's not about the language so much as about associations - it seems to me that, once the latent space gets high-dimensional enough, a lot of problems reduce to adjacency search.

I'm starting to get a (sure, uneducated) feeling that this high-dimensional association encoding and search is fundamental to thinking, in a similar way to how a conditional and a loop are fundamental to (Turing-complete) computing.

Now, the next obvious step is of course to add conditionals and loops (and lots of external memory) to a proto-thinking LLM model, because what could possibly go wrong. In fact, those plugins are one of many attempts to do just that.
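To make "conditionals, loops, and external memory" concrete, here's a toy sketch of the pattern (assuming the openai Python client; the DONE stop condition and the memory list are purely illustrative, not anything OpenAI ships):

  import openai  # assumes OPENAI_API_KEY is set in the environment

  # External memory: the whole transcript is replayed into the model on every call.
  memory = [{"role": "system", "content": "Solve the task one step at a time."}]

  def step(prompt):
      memory.append({"role": "user", "content": prompt})
      resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=memory)
      answer = resp["choices"][0]["message"]["content"]
      memory.append({"role": "assistant", "content": answer})
      return answer

  # The loop and the conditional: feed the model's own output back to it
  # until it claims to be finished (a deliberately naive stop condition).
  out = step("Plan the first step of packing a small car for a camping trip.")
  for _ in range(5):
      if "DONE" in out:
          break
      out = step("Continue from your previous step. Reply DONE when finished.")

Plugins do roughly the same thing, except the memory and the tool calls live on OpenAI's side instead of in your own script.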


I completely agree. I have noticed this over the last few years in trying to understand how my own creative thinking seems to work. It seems to me that human creative problem solving involves embedding or compressing concepts into a spatial representation so we can draw high-level analogies. A search around that location then brings creative new ideas translated from analogous situations. I can directly observe this happening in my own mind. These language models seem to do the same.


> It can reason correctly about hard problems with very little information.

I am so tired of seeing people who should know better think that this program can reason.

(in before the 400th time some programmer tells me "well aren't you just an autocomplete" as if they know anything about the human brain)


>(in before the 400th time some programmer tells me "well aren't you just an autocomplete" as if they know anything about the human brain)

Do you know any more about ChatGPT internals than those programmers know about the human brain?

Sure, I believe you can write down the equations for what is going on in each layer, but knowing how each activation is calculated from the previous layer tells you very little about what hundreds of billions of connections can achieve.


> Do you know any more about ChatGPT internals than those programmers know about the human brain?

Yes, especially every time I have to explain what an LLM is or anytime I see a comment about how ChatGPT "reasoned" or "knew" or "understood" something when that clearly isn't how it works by OpenAI's own admission.

But even if that weren't the case: yes, I understand some random ML project better than programmers understand what constitutes a human!


Honestly, I don't see how anyone really paying attention can draw this conclusion. Take a look at the kinds of questions on the benchmarks and the AP exams. Independent reasoning is the key thing these tests try to measure. College entrance exams are not about memorization and regurgitation. GPT-4 scores a 1400 on the SAT.


No shit, a good quarter of the internet is SAT prep. Where do you think GPT got its dataset?


I have a deprecated function and ask ChatGPT what I should use instead; ChatGPT responds by inventing a random nonexistent function. I tell it that the function doesn't exist, and it tries again with another nonexistent function.

Oddly enough, that sounds like a very simple language-level failure, i.e. the tool generates text that matches the shape of the answer but not its details. I am not far enough into this ChatGPT religion to gaslight myself over outright lies, like Elon Musk fanboys seem to enjoy doing.


Who's to say we're not already there?

dons tinfoil hat


The ethics committee got lazy and had GPT write the test.


Yes, you are right. But also right were the people who didn't want a highway built near their town because criminals could drive in from a nearby city in a stolen car, commit crimes, and get out of town before the police could find them.

The world is going to be VERY different 3 years from now. Some of it will be bad, some of it will be good. But it is going to happen no matter what OpenAI does.


Highway inevitability is a fallacy. They could've built a railway.


A railway would have created a gov't/corporate monopoly on human transport.

Highways democratized the freedom of transportation.


> Highways democratized the freedom of transportation.

What a ridiculous idea.

Highways restrict movement to those with a license and a car, and they do not care about pollution or anyone around them.


In no way did highways restrict movement. They may not have given everyone the exact same freedom of movement, but they did, in fact, increase the freedom of movement of the populace as a whole.


My experience in the States, staying at a hotel 100m away from a restaurant and not being able to reach it by foot, says otherwise...


This is the single most American thing I’ve seen on this terrible website.


They are not exclusive


TIL, no one moved anywhere until American highways were built.


They moved slower, yes.


I mean, we already know that if the tech bros have to balance safety vs. disruption, they'll always choose the latter, no matter the cost. They'll sprinkle some concerned language about impacts in their technical reports to pretend to care, but does anyone actually believe that they genuinely care?

Perhaps that attitude will end up being good and outweigh the costs, but I find their performative concerns insulting.


What I want to know is, what gives OpenAI and other relatively small technological elites permission to gamble with the future of humanity? Shouldn't we all have a say in this?


I have seen this argument a bunch of times and I am confused by what exactly you mean. Everyone is influencing the future of humanity (and in that sense gambling with it?) What gives company X the right to build feature Y? What gives person A the right to post B (for all you know it can be the starting point of a chain of actions that bring down humanity)

Are you suggesting that beyond a threshold all actions someone/something does should be subject to vote/review by everyone? And how do you define/formalise this threshold?


There's a spectrum here.

At one end of the spectrum is a thought experiment: one person has box with a button. Press the button and with probability 3/4 everyone dies, but with probability 1/4 everyone is granted huge benefits --- immortality etc. I say it's immoral for one person to make that decision on their own, without consulting anyone else. People deserve a say over their future; that's one reason we don't like dictatorships.

At the other end are people's normal actions that could have far-reaching consequences but almost certainly won't. At this end of the spectrum you're not restricting people's agency to a significant degree.

Arguing that because the spectrum exists and it's hard to formalize a cutoff point, we shouldn't try, is a form of the continuum fallacy.


>Arguing that because the spectrum exists and it's hard to formalize a cutoff point, we shouldn't try, is a form of the continuum fallacy.

Such an argument wasn't made.

It is a legitimate question. Where and when do you draw the line? And who does it? How are we formalising this?

You said

>Shouldn't we all have a say in this?

I am of the opinion that if this instance of this company doing this is being subjected to this level of scrutiny then there are many more which should be too.

What gave Elon the right to buy Twitter? And I would imagine most actions that a relatively big corp takes fall under the same criteria. And most actions governments take also fall under the same criteria?

These companies have a board and the governments (most?) have some form of voting. And in a free market you can also vote with your actions.

You are suggesting you want to have a direct line of voting for this specific instance of the problem?

Again, my question is. What exactly are you asking for? Do you want to vote on these? Do you want your government to do something about this?


>What gives a bunch of people way smarter than me permission to gamble with the future of humanity?

To ask the question is to answer it.


That's not the question I asked. FWIW I'm actually part of one of these groups!


Every time someone drives they gamble with humanity in a much more deadly activity. No one cares.


Car crashes don't affect the trajectory of human civilization all that much.


Except the one that killed Nujabes.


There might be government regulation on AI pretty soon. It's not crazy to think GPUs and GPU tech will be treated as defense equipment some day.


>it's not crazy to think GPUs and GPU tech would be treated as defense equipment some day

They already are. Taiwan failing to take its own defense seriously is completely rational.


Presumably the same as always. They are rich and we are not.


> gamble with the future of humanity

what in the world are you people talking about, it's a fucking autocomplete program


>it's a fucking autocomplete program

So like a human? I'd say they were pretty influential on the future of humanity.


Like clockwork. I swear to god you're all reading from the same script.

I beg of you to take some humanities courses.


> And OpenAI talks on and on about 'Safety' but all that 'Safety' means is "well, we didn't let anyone allow it to make jokes about fat or disabled people so we're good, right?!"

No, OpenAI “safety” means “don’t let people compete with us”. Mitigating offensive content is just a way to sell that. As is stoking... exactly the fears you cite here, but about AI that isn’t centrally controlled by OpenAI.


It's a weird focus compared with how the internet developed in a very wild-west way. Imagine if internet tech had been delayed until they could figure out how to keep it from being used for porn.

Safety from what exactly? The AI being mean to you? Just close the tab. Safety to build a business on top of? It's a self-described research preview; perhaps it's too early to be thinking about that. Yet new releases are delayed for months for 'safety'.


You can't control whether your insurance company decides to use it as a filter for whether to approve you, or what premiums to charge you.


Can you control how your insurance makes these decisions today?


It’s Altman. Does no one remember his Worldcoin scam?

Ethics, doing things thoughtfully / the “right” way etc is not on his list of priorities.

I do think a reorientation of thinking around legal liability for software is coming. Hopefully before it’s too late for bad actors to become entrenched.


Has anyone tried handing loaded guns to a chimpanzee? Feels like underexplored research.


The limiting factor is breeding rate. Nobody has time to wait to run this experiment for generations (chimpanzee or human ones). ML models evolve orders of magnitude faster.


Ah. Well that's easy enough to sort. We just need to introduce some practical limit to AI breeding. Perhaps some virtual score keeping system similar to money and an AI dating scene with a high standard for having it.

I'm only half joking.


Let humans plug into it to get a peek at statistical distribution of their own prospects, and I think there was a Black Mirror episode just about that.


Executing several hundred thousand trades at 8:31AM would indeed be impressive! Imagine what it could do when the market is open!


>"today it somehow executed several hundred thousand threads of a python script that made perfectly timed trades at 8:31AM on the NYSE which resulted in the largest single day drop since 1987..."

this is hyperbolic nonsense/fantasy


Literally 6 months ago you couldn't get ChatGPT to call up details from a webpage or send any data to a 3rd party API connected to the web in any way.

Today you can.

I don't think it is a stretch to think that in another 6 months there could be financial institutions giving API access to other institutions through ChatGPT, and all it takes is one stupid access control hole or bug for my above sentence to ring true.

Look how simple and exploitable various access token breaches in various APIs have been in the last few years, or even simple stupid things like the aCropalypse "bug" from last week (it wasn't even a bug, just someone making a bad change to a function call, with the resulting misuse spreading unnoticed).


This has nothing to do with ChatGPT. An API endpoint will be just as vulnerable if it's called from any other application. There's nothing special about an LLM interface that makes this more or less likely.

It sounds like you're weaving science fiction ideas about AGI into your comment. There's no safety issue here unless you think that ChatGPT will use api access to pursue its own goals and intentions.


They don't have to be actions toward its own goals. They just have to seem like the right things to say, where "right" is operationalized by an inscrutable neural network, and might be the results of, indeed, some science fiction it read that posited the scenario resembling the one it finds itself in.

I'm not saying that particular disaster is likely, but if lots of people give power to something that can be neither trusted nor understood, it doesn't seem good.


I'm sure that with the right prompting, you can get it to very convincingly participate as a party in a contract negotiation or business haggling of some sort. It would be indistinguishable from an agent with its own goals and intentions. The thing about "it has no goals and intents" is that it is contradictory with its purpose of successfully passing off as us: beings with goals and intents. If you fake it well enough, do you actually have it?


> The thing about "it has no goals and intents" is that it is contradictory with its purpose of successfully passing off as us: beings with goals and intents.

The thing about "it has no goals and intents" is that it's not true. It has them - you just don't know what they are.

Remember the Koan?

  In the days when Sussman was a novice, Minsky once came to him
  as he sat hacking at the PDP-6.

  "What are you doing?", asked Minsky.
  "I am training a randomly wired neural net to play Tic-tac-toe", Sussman replied.
  "Why is the net wired randomly?", asked Minsky.
  "I do not want it to have any preconceptions of how to play", Sussman said.

  Minsky then shut his eyes.
  "Why do you close your eyes?" Sussman asked his teacher.
  "So that the room will be empty."
  At that moment, Sussman was enlightened.


It has a goal of "being a helpful and accurate text generator". When you peel back the layers of abstraction, it has that goal because OpenAI decided it should have that goal. OpenAI decides its goals based on the need to make a profit to continue existing as an entity. This is no different from our own wants and goals, which ultimately stem from the evolutionary preference for continuing to exist rather than not. In the end, all goals reduce to a self-referential loop, to wit:

I exist because I want to exist

I want to exist because I exist

That is all there is at the root of the "why" tree once all abstractions are removed. Everything intentional happens because someone thinks/feels like it helps them keep living and/or attract better mates somehow.


You're confusing multiple different systems at play.

OpenAI has specific goals for ChatGPT, related to their profitability. They optimize ChatGPT for that purpose.

ChatGPT itself is an optimizer (search is an optimization problem). "Being a helpful and accurate text generator" is not the goal ChatGPT has - it's just a blob of tokens prepended to the user prompt to bias the search through latent space. It's not even hardcoded. ChatGPT has its own goals, but we don't know what they are, because they weren't given explicitly. But if you observed the way it encodes and moves through the latent space, you could eventually, in theory, glean them. They probably wouldn't make much sense to us - they're an artifact of the training process and training dataset selection. But they are there.

Our goals... are stacks of multiple systems. There are the things we want. There are the things we think we want. There are things we do, and then are surprised, because they aren't the things we want. And then there are things so basic we don't even talk about them much.


>Literally 6 months ago you couldn't get ChatGPT to call up details from a webpage or send any data to a 3rd party API connected to the web in any way.

Not with ChatGPT, but plenty of people have been doing this with the OpenAI (and other) models for a while now, for instance with LangChain, which lets you use the GPT models to query databases for intermediate results, issue Google searches, or generate and evaluate Python code based on a user's query...
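For anyone who hasn't tried it, the basic pattern looks roughly like this (a sketch based on LangChain's quickstart from around that time; exact module paths and agent names may differ between versions, and it assumes OpenAI and SerpAPI keys in the environment):

  from langchain.agents import initialize_agent, load_tools
  from langchain.llms import OpenAI

  llm = OpenAI(temperature=0)
  # "serpapi" issues Google searches; "llm-math" evaluates model-generated math.
  tools = load_tools(["serpapi", "llm-math"], llm=llm)

  # A ReAct-style agent: the model decides which tool to call, reads the result,
  # and keeps going until it can answer.
  agent = initialize_agent(tools, llm, agent="zero-shot-react-description", verbose=True)
  agent.run("What was yesterday's high temperature in SF, raised to the 0.023 power?")

Same idea as the official plugins, just wired up yourself.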


You definitely could do that months ago, you just had to code your own connector.


Oh yes. It would of course have to happen after the market opens. 9:30 AM.


I'm also confused - maybe I'm missing something. Cannot I, or anyone else, already execute several hundred thousand 'threads' of python code, to do whatever, now - with a reasonably modest AWS/Azure/GCE account?


Yes. I think the point is that a properly constructed prompt will do that at some point, lowering the barrier of entry for such attacks.


Oh - I see. But then again, all those technologies themselves lowered the barriers of entry for attacks, and I guess yeah people do use them for fraudulent purposes quite extensively - I’m struggling a bit to see why this is special though.


The special thing is that current LLMs can invoke these kinds of capabilities on their own, based on unclear, human-language input. What they can also do is produce plausible-looking human-language input. Now, add a lot more memory, feed an LLM its own output as input, and... it may start using those capabilities on its own, as if it were a thinking being.

I guess the mundane aspect of "specialness" is just that, before, you'd have to explicitly code a program to do weird stuff with APIs, which is a task complex enough that nobody really bothered. Now, LLMs seem on the verge of being able to self-program.


Why do companies with lots of individuals tend to get a lot of things done, especially when they can be subdivided into groups of around 150?

Dunbar's number is thought to be about as many relationships as a human can track. After that, the network costs of communication get very high and organizations can end up in internal fights. At least that is my take on it.

We are developing a technology that currently has a small context window, but no one I know has seriously defined the limits of how much an AI could pay attention to in a short period of time. Now imagine a contextual pattern matching machine that understands human behaviors and motivations. Imagine if millions of people every day told the machine how they were feeling. What secrets could it get from them and keep? And if given motivation, what havoc could be wreaked if it could loose that knowledge on the internet all at once?


I think it's not special. It's even expected.

I guess people think that taking that next step with LLMs shouldn't happen, but we know you can't put the brakes on stuff like this. Someone somewhere would add that capability eventually.


"If I don't burn the world, someone else will burn the world first" --The last great filter


Conceivably ChatGPT could help, with more suggestions for fuzzing that independently operating malicious actors may not have been able to synthesize.

Most of the really bad actors have skills approximately at or below those displayed by GPT-4.


Seems easier to do it the normal way. If a properly constructed prompt can make chatGPT go nuts, so could a hack on their webserver, or a simple bug in any webserver.


If crashing the NYSE was possible with API calls, don’t you think bad actors would already have crashed it?


How is this hyperbolic fantasy? We've already done this once - without the help of large language models[1].

[1]: https://en.wikipedia.org/wiki/2010_flash_crash


Doesn't that show exactly that this problem is not related to LLMs? If an API allows millions of transactions at the same time, then the problem is not an LLM abusing it but anyone abusing it. And the fix is not to disallow LLMs, but to disallow this kind of behavior. (E.g. via the "circuit breakers" introduced after that crash. Although whether those are sufficient is another question.)


> then the problem is not an LLM abusing it but anyone abusing it

I think that's exactly right, but the point isn't that LLMs are going to go rogue (OK, maybe that's someone's point, but I don't think it's particularly likely just yet) so much as they will facilitate humans to go rogue at much higher rates. Presumably in a few years your grandma could get ChatGPT to start executing trades on the market.


With great power comes great responsibility? Today there's nothing stopping grandmas from driving, so whatever could go wrong is already going wrong


It’s a problem of scale. If grandma could autonomously pilot a fleet of 500 cars we might be worried. Same thing if Joe Shmoe can spin up hundreds of instances of stock trading bots.


You're better off placing your bet on Russian and Chinese hackers and crypto scammers than on a Joe Shmoe. But read https://aisnakeoil.substack.com/p/the-llama-is-out-of-the-ba... - there's no noticeable rise in misinformation.


You don't understand the alignment problem.


Oh I'm aware of it. I do not think it holds any merit right now when we're talking about coding assistants.


Not really. More behind the curve (noting stock exchanges introduced 'circuit breakers' many years ago to stop computer algorithms disrupting the market).


/remindme 5 years


Ultimate destruction from AGI is inevitable anyway, so why not accelerate it and just get it over with? I applaud releasing these tools to public no matter how dangerous they are. If it's not meant for humanity to survive, so be it. At least it won't be BORING


Death is inevitable. Why not accelerate it?

Omg you should see a therapist.


> Omg you should see a therapist.

How do you know I'm not already?


I wouldn't exactly call this suicidal ideation, but maybe a topic to broach at your next session.


A difference in philosophy is not cause for immediate therapy. Most therapists are glorified echo chambers, only adept at 'fixing' the more popular ills. For 200 bucks an hour.


Difference in philosophy is not "the world can't end fast enough and nothing matters."


Funny, but I actually discussed this with my therapist. He asked me where he can try out the AI shrink and was impressed by it. He's on board!


Please keep commenting on HN


Finally something agreeable.


Immanentizing the Eschaton!


> And OpenAI talks on and on about 'Safety' but all that 'Safety' means is "well, we didn't let anyone allow it to make jokes about fat or disabled people so we're good, right?!"

Anyone who believes OpenAI safety talk should take an IQ test. This is about control. They baited the openness and needed a scapegoat. Safety was perfect for that. Everyone wants to be safe, right?


> This is about control. They baited the openness and needed a scapegoat. Safety was perfect for that. Everyone wants to be safe, right?

The moment they took VC capital was the start of them closing everything and pretending to care about 'AI safety' and using that as an excuse and a scapegoat as you said.

Whenever they release something for free, always assume they have something better but will never open source.


The question is whether their current art gives them the edge to build a moat. Specifically, whether in this case the art itself can help create its own next generation so that the castle stays one exponential step out of reach. That seems to be the ballgame. Say what you will, it does seem to be the ultimate form of bootstrapping. Although uncomfortably similar to strapping oneself to a free falling bomb.


I wish OpenAI and Google would open-source more of their jewels too. I have recently heard that people are not to be trusted "to do the right thing."

I personally don't know what that means or if that's right. But Sam Altman allowed GPT to be accessed by the world, and it's great!

Given the number of people in the world with access to and understanding of these technologies, and given that such a large portion of the infosec and hacker world knows how to cause massive havoc but has always remained peaceful, apart from a few curious explorations, I think that shows the good nature of humanity.

It's incredible how complexity evolves, but I am really curious how the same engineers who created YTsaurus or GPT-4 would have built the same systems using GPT-4 plus their existing knowledge.

How would a really good engineer, who knows the TCP stack, protocols, distributed systems, consensus algorithms, and many other crazy things taught in SICP and beyond, use an AI to build the same thing? And would it be faster and better? Or are my/our expectations of LLMs set too high?


I'm sure somebody posted this exact same comment in an early 1990s BBS about the idea of having a computer in every home connected to the internet.

I would first wait until ChatGPT causes the collapse of society and only then start thinking about how to solve it.


I found comments from some usually sensible voices saying that ChatGPT wasn't a threat because it wasn't connected to anything.

As if the plumbing of connecting up pipes and hoses between processes online or within computers isn't the easiest part of this whole process.

(I'm trying to remember who I saw saying this or where, though I'm pretty sure it was in an earlier HN thread within the past month or so. Of which there are ... frighteningly many.)


Yes but.... money


> "today it somehow executed several hundred thousand threads of a python script that made perfectly timed trades at 8:31AM on the NYSE which resulted in the largest single day drop since 1987..."

Wouldn't it be a while before AI can reliably generate working production code for a full system?

After all, it's only got open source projects and code snippets to go off of.


This seems like baseless fearmongering for the sake of fearmongering. Sort of like NIMBYism. "No I don't want the x people to have access to concentrated housing in my area, some of them could be criminals" while ignoring all the benefits this will bring in automating the mundane things people have to do manually.


I think where the rubber meets the road is that OpenAI can, to some degree, make it harder for their bot to make fun of disabled people, but they can't stop people from hooking up their own external tools to it with the likes of langchain (which is super dope), and first-party support lets them get a cut from people who don't want to DIY.


> giving a loaded handgun to a group of chimpanzees.

Hate to be that guy, but this is our entire relationship to AI.


I mean, I love it, but I don't know what they mean by safety. With Zapier I can just hook into anything I want, custom scripts, etc. There seem to be almost no limits with Zapier, since I can just proxy it to my own API.


As soon as someone tries fraudulent deploys involving GPTs, the law will come crashing down on them. Fraud gets penalized heavily, especially financial fraud. Those laws have teeth and they work, all things considered.

What you're describing is measurable fraud that would have a paper-trail. The federal and state and local governments still have permission to use force and deadly violence against installations or infrastructure that are primed in adverse directions this way.

Not to mention that the infrastructure itself is physical infrastructure that is owned by the entire United States and will never exceed our authority and global reach if need be.


I agree with your skepticism. I also think this is the next natural step once “decision” fidelity reaches a high enough level.

The question here should be: Has it?


We're getting really close to Neuromancer-style hacking where you have your AI try to fight the other person's AI.


A rogue AI with real-time access to sensitive data wreaks havoc on global financial markets, causing panic and chaos. It's just not hard to see that it's going to happen, like how faster cars inevitably ended up with someone in a horrible crash.

But it's our responsibility to envision such grim possibilities and take necessary precautions to ensure a safe and beneficial AI-driven future. Until we're ready, let's prepare for the crash >~<


It has already happened. The 2010 Flash Crash has been largely blamed on other things, rightly or wrongly, but it seems accepted that unfettered HFT was involved.

HFT is relatively easy to detect and regulate. Now try it with 100k traders all taking their cues from AI based on the same basic input (after those traders who refuse to use AI have been competed out of the market.)


> 1987

Don't you mean August 10, 1988?


> But this is starting to feel like giving a loaded handgun to a group of chimpanzees.

Why?


HN hates blockchain but loves AI...

well, let's fast forward to a year from now


Coordinated tweet short storm.


I dig the Hackers reference.


> today it somehow executed several hundred thousand threads of a python script that made perfectly timed trades at 8:31AM on the NYSE which resulted in the largest single day drop since 1987.

Sorry do you have a link for this?


The only agency ChatGPT has is the user typing in data for text completion.