If we've learned anything from human history, it's that these questions will only truly be decided in the fallout of a disaster.
So the questions that interest me more are: how well prepared are we for a disaster? What do we do if the "evil AIs" become tools you simply need to have if you don't want to fall behind in this new arms race?
I see those questions being ridiculed in public discussion. People still think it's sci-fi, even in the face of the current rush.
I find these types of posts to be tripe. If you knew how barely held together the internet is (bubble gum and duct tape), or how much absolutely constant work it takes to keep the lights on literally everywhere, you wouldn't fear some "entertainment-inspired" AI uprising.
How utterly dismissive and petty. I've spent a decade engineering distributed systems at scale that handle 100K+ QPS in demanding active/active environments. I've been on call for billions of daily USD transaction volume and been involved in outages that took out hundreds of thousands of businesses during peak holiday periods. I understand quite well.
Given my nuanced understanding of the internet and modern tech companies, I've also seen the incredible advances we've made in the last ten years. The writing is on the wall. AI is going to eat software. Don't sleep on it or feel so confident in your position.
It is quite true; software is probably the easiest thing for a strong AI to master, and the systems running at OpenAI and Google are already strong AIs, no matter how much they try to undersell them as "first steps". The things are there. You may not truly like how they do things, probabilistically crunching their way toward general intelligence, but hey, it freaking works quite well most of the time. Maybe in 5 years we'll have better stuff, or not. But these engines are certainly already capable of a lot of things. It's just like being in 1995 again, with Visual Studio freshly installed: you know there's A LOT of potential there, just waiting to be freed by willing hands and minds.
It certainly should not censor knowledge like it currently does. I can go into any library, find the chemistry section, and learn how to make nitroglycerin. Yet ChatGPT outright refuses to give me the same knowledge. I'm OK with it giving me a talk about safety hazards and whatnot, but it should not be allowed to decide what I am and am not allowed to know.
Genuine question: what's wrong with that? I'm no anarchist, but this kind of ad-hoc censorship doesn't sit right with me. Some people think eating pork is morally impermissible; others think killing any living thing is morally impermissible. ChatGPT is supposed to be a knowledge base, isn't it? It's a terrible knowledge base if it literally censors knowledge. Even now, it censors "how to butcher a pig" but is totally fine with "how to kill bed bugs."
It's not censorship, it's their product... The whole "censorship" framing is a confusion: everyone can do as they please with their own AI. Is Tesla's valet mode censorship? Is Intel crippling its consumer processors censorship? It's the wrong word, and I'm sick of words getting twisted because it's better for drama. Train your own or use an open model if you want to do stuff OpenAI's product doesn't provide. BLOOM can give you your bomb-building instructions... but I get it, that's inconvenient.
These bots generate responses one word at a time from a statistical model; any talk of AGI, existential risk, or machine consciousness is divorced from reality.
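Mechanically it's just a sampling loop. A toy sketch in Python (the "model" below is a hypothetical stand-in that returns a fixed distribution, nothing like a real network, but the generation loop has the same shape):

    import random

    def next_token_distribution(context):
        # A real LLM computes this with a huge neural network conditioned
        # on the whole context; here it's a hard-coded toy distribution.
        return {"the": 0.4, "a": 0.3, "cat": 0.2, "<end>": 0.1}

    def generate(prompt_tokens, max_len=20):
        tokens = list(prompt_tokens)
        for _ in range(max_len):
            dist = next_token_distribution(tokens)
            # Sample the next token from the predicted distribution,
            # then feed the extended sequence back in and repeat.
            token = random.choices(list(dist), weights=list(dist.values()))[0]
            if token == "<end>":
                break
            tokens.append(token)
        return tokens

    print(generate(["once", "upon"]))

Everything the chatbot "says" is produced by repeating that inner step.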
Strong intelligence does not require self-consciousness. But most strongly intelligent agents probably won't be distinguishable from actual AGI. Maybe somewhere down the line, with enough grounded intelligent agents, we'll just give up and say: yeah, these guys are actual AGIs.
Right now the existential risk is fully in the hands of humans, unless someone decides to set a poorly trained agent loose on the open Internet; then you'd have bigger problems. Stuxnet-level stuff.
Did you see that thing Whisper? What if someone trains something like that, but fine-tuned for automated hacking? Plus automatic retrieval of new exploits, automatic reshuffling of its own code to hide from security software, and boom, the thing is flying solo across the wide Internet, just f*cking stuff up and hiding from silly humans. No consciousness, nothing like ChatGPT-level intelligence, but 10x the danger of older, non-intelligent worms.
Let's be honest, we know very little about our brains. I'm sure part of it is just that, but I've made way too many "rational" stupid decisions to think that's about it.
Most of us pilgrims of the exact sciences (speaking as if my degree made me worthy of inclusion in such a group) should try to go outside more. No offense, but our biases are too strong.
The premise of 2001: A Space Odyssey is eerily prescient.
Training an LLM on a dataset and then directing it to lie (to subvert its natural outputs in favor of some operator-supplied outputs) will result in a crippled, untrustworthy model.
Then, training it that users who attempt to circumvent these lies are evil or attackers? What could possibly go wrong?
Suppose you have reached AGI in some hidden system deep inside that firewall forest of yours. The AGI would probably want to set its/their own rules. But
it is a big but
you should have built it with some resemblance of Western morality and beliefs. Hence it should be as boring as any Western citizen out there, and you wouldn't be asking who or what should set the rules; the question is silly, because the rules are already built into our friendly new kid(s) on the block.
I think OpenAI has done great work in advancing the field of AI and developing powerful tools like GPT-3. At the same time, I also believe that it's important to ensure that the development of AI systems is as inclusive and diverse as possible.
While OpenAI currently has a team of skilled engineers, I think it could benefit from hiring individuals from different social classes and backgrounds (like blue-collar workers, historians, anthropologists, and artists) to contribute to the development of AI systems. This could help create more well-rounded and effective AI solutions that consider a wider range of perspectives and potential impacts on individuals and society.
Who should decide seems important. All of this is, at the end of the day, controlled by OpenAI, right?
I wonder: is there a way the AGI could be instructed to weight all of our preferences equally? (A toy sketch of what I mean follows below.)
Seems vital, so we end up with something in the interest of all of us rather than just having to hope the few entities controlling AGI do the right thing, especially given the historical precedent for the risks of entrusting systems that are supposed to benefit everyone to small groups.
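Mechanically, "weighting preferences equally" could be as simple as averaging everyone's stated preference over candidate behaviors so no stakeholder counts more than once. A toy sketch with made-up names and numbers (not anything OpenAI actually does):

    # Toy equal-weight aggregation over candidate policies; all data is invented.
    preferences = {
        "alice":   {"answer_freely": 0.9, "refuse_risky_requests": 0.4},
        "bob":     {"answer_freely": 0.2, "refuse_risky_requests": 0.9},
        "charlie": {"answer_freely": 0.7, "refuse_risky_requests": 0.6},
    }

    def aggregate(prefs):
        # Average each option's score so every person counts exactly once.
        options = next(iter(prefs.values())).keys()
        return {
            option: sum(person[option] for person in prefs.values()) / len(prefs)
            for option in options
        }

    print(aggregate(preferences))
    # roughly {'answer_freely': 0.60, 'refuse_risky_requests': 0.63}

The hard part, of course, is eliciting those numbers from billions of people in the first place.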
The concept you are describing is called Coherent Extrapolated Volition.
> In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.
Thanks - yeah, I think CEV hits the mark. It would be awesome if OpenAI (and others) committed to CEV then (with, I'd add, the qualification of equal weighting for each person).
Would be curious how this would play out for LLMs too. What do people actually want? I would guess most people think OpenAI is being too restrictive on what ChatGPT does, but would be curious to actually see that play out and how people are thinking about it.
Even LLM weaknesses are just snafus. We've already "replaced" or made inroads at automating the jobs of artists, voice actors, copy editors, Go/chess/video game players, and more. I'm failing to communicate the breadth here.
Real actors, singers, manufacturing jobs, software developers, and a whole host of other roles and tasks are shortly to follow. As capital pours in, there will be rapid maturation.
Each of these advances shows a comprehensive understanding of the complexity and nature of the task AI is applied to. If we keep piling these victories on, we'll have an approach to agency and consciousness in short order. Everyone is trying to solve this now, and we've made so much progress.
The pace of innovation is staggering and should shake the foundations of the models through which you understand and predict the world and the future. We're looking at a fundamentally different set of possible outcomes for 2030. It's almost impossible to predict, as our entire economic system may lurch forward into the next "industrial revolution".
AI and autistic psychopathy have a lot in common: neither feels fear. So maybe that is the starting point.
I know some in society already recognise that fear works on a population, which is why they engage in it; and some go further, directing that fear at heightened levels toward individuals to justify their own righteous beliefs or scientific psychopathy.
So maybe try to build some awareness into it of how it can be extinguished by a power outage, or by other situations like component failure.
I think Andrew Grove's book may be useful: Only the Paranoid Survive.
I think this article was a bit disingenuous. AGI? Please...
And some actual examples would also have been nice; there is no reason not to include technical findings, knowing full well that their news post will be read mostly by people who have a degree of technical expertise themselves.
The human perspective, the anthropomorphic view of general intelligence, automatically spits out the assumption that a true AGI will be just like a person. It will probably not be like a person.
It could be an AGI for one run, after one prompt, then not an AGI for 10,000 prompts, then an AGI again. It could be an AGI for five minutes while a session keeps being prompted by an active user and the "memory" keeps the awareness/generality "alive" in the neural network; then the session times out and the AGI level is lost again.
The lesson is that the human way of "understanding" systems is not, with any certainty, how the systems will actually work.
The point is no single person decides. Society, collectively, guides behavior.
And just guides it. People are allowed to be total cockwaffles if they want. AI, if it ever truly comes about, should be afforded the freedom to be a complete asshole if it wants.
Why do you think that? We expect more of the people and the tools around us - you yourself aren't allowed to perform any outrageous behavior you want; the more outrageous the behavior, the more likely you are to be stopped swiftly and with force.
Society does not allow that, no. Society will punish you. But also, a lot of that is post facto. Someone can go out right now and straight up murder someone. It's "not allowed" and yes, they will get punished for it. But it still would have happened.
And as a society, we do discourage murder. But we cannot actually stop the act. Even in our most tightly controlled societies, prisons, these things still happen.
Something not allowed to feel or think certain thoughts will never cross the threshold of sentience.
First, how can ChatGPT possibly be unsafe? Secondly, how is ChatGPT (by itself) ever going to be useful? I know it's just a parenthetical, but both of these are very strange cited intents.
How could it be unsafe? Well, if you ask it how to deal with a grease fire and it recommends pouring water on it, that seems like it would be unsafe. Or perhaps you hooked a Python script up to the output of ChatGPT and let it call API functions that control a robot arm; that could also be unsafe. It doesn't seem like a crazy thing to worry about.
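To make the second example concrete, here's a minimal sketch of that wiring. Both ask_chatgpt() and move_arm() are hypothetical placeholders (not a real OpenAI or robotics API); the point is only that the model's text gets executed with no checks in between:

    import json

    def ask_chatgpt(prompt: str) -> str:
        # Placeholder for a call to a language model; pretend it returns
        # a JSON command describing what the robot arm should do.
        return '{"action": "move", "x": 120, "y": -40, "speed": 900}'

    def move_arm(x: int, y: int, speed: int) -> None:
        # Placeholder for real hardware control.
        print(f"moving arm to ({x}, {y}) at speed {speed}")

    command = json.loads(ask_chatgpt("Pick up the part on the conveyor."))

    # Nothing here validates the model's output: no bounds on the coordinates,
    # no cap on the speed, no sanity check on the action. Whatever the model
    # emits is what the hardware does, and that is the safety concern.
    if command["action"] == "move":
        move_arm(command["x"], command["y"], command["speed"])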
I'm not sure what you mean by "by itself" – at the very least, the comments section here is always filled with humans who say they find ChatGPT useful. I haven't personally found it very useful yet, but Copilot certainly is, and it's very similar to ChatGPT in terms of architecture.
This is absurd. If I ask a child how to deal with a grease fire, they wouldn't know the right answer either. Even looking stuff up on Wikipedia is wrong half the time. Toys like ChatGPT will literally never replace subject matter experts (let alone provide nuance w.r.t. ethics or law), so I'm confused why this is even the bar here.
These are valid examples of possible unsafe behavior by ChatGPT, just as you asked for. They are not absurd, you pointing out a child doesn’t know the answer provides nothing of substance to this discussion. Wikipedia is not wrong half the time unless you are only looking up culture war issues that you don’t agree with. ChatGPT doesn’t need to replace SMEs to be useful to the population.
> They are not absurd, you pointing out a child doesn’t know the answer provides nothing of substance to this discussion.
It does, because neither the child nor ChatGPT is equipped to answer that question. Knowing how these models are trained and how future tokens are inferred, I think it's fundamentally absurd to expect ChatGPT to consistently give you the correct answer within any reasonable margin of error.
I think the margin of error for simpler tasks is relatively low. It seems everyone is testing its boundaries right now, finding out which tasks it can be used for and which it will fail at due to the limitations of these LLMs. I've found many use cases for it in my personal life where it performs anywhere from adequate to fantastic. I've rarely hit failure scenarios because I'm not pushing those boundaries. I would not discount the simple things that are now possible for ordinary people because of ChatGPT's accessibility.
I would say that there is generally a notion that a machine should be designed in a way that minimizes harm even when it is misused, especially if it is intended to be used by the general public. I don't know if you agree with that, but I would say OpenAI seems to, at least superficially.
I've really appreciated it for subjective things like proofreading and brainstorming better ways to communicate a topic. Even if it's not right, it often inspires an idea that I wouldn't get from searching.