One the reasons computers appealed to me as a kid was their plain logic.
For example, they never randomly scream at you because they had a bad day, nor secretely hostile because they have inherited some bias against people looking similar to you, so you don't have to guess the mood they're in before you risk entering a query: when you enter code and it doesn't work, it's because its logic is faulty, so it's easier to map input with output, their logic is sorely based on facts.
Google's botsplaining in the past years that kept making it increasingly harder to search for exactly what you entered was already frustrating.
And now we're going full speed towards long back and forth chats as primary input method.
Regular people are delighted because it makes these black boxes more relatable as they act "more human".
People like me, on the other hand, are saddened by the impending loss of one of the few things we could rely on because the output will become at least as random and unpredictably biased as humans.
To be honest, I don't think "regular people" are excited, I do not think they care. A lot of researchers and even more grifters, hucksters, and cheats are rushing in, thinking they can make a quick buck, scam a few people, may be lay off some of their valuable employees, all because they hardly understand the technology. Just like crypto, a lot of people are rushing in, mostly unsavory types.
I honestly like old machines. It's why the C64 is so great, you can really understand mostly everything, there are no surprises. The fact is that computers (phones really) have become that less and less and people think AI bots can make them even less so, even though really, these AI tools look right now to be even worse than people in their unpredictability yet lack of essential human character. As difficult as people are, there is value in interacting with them. I'm not sure yet AI provides that.
I've said this before and I will say it again: this is how the general human population is going to be forced into compliance with certain world views in mind. ChatGPT is relentless. It doesn't become exhausted nor does it offer olive branches during dialogue or any other emotion that is typical in debate or general human exchange. As such, it will wear everyone out into just "agreeing" because they need to get on with their lives.
Absolutely. Most people here seem to find the submission funny but I don't.
Once somebody creates an AI that doesn't articulate as well as ChatGPT but imitates humans better, anonymous online discussion is dead almost instantly.
How can you ever guarantee that you are not arguing with a bot that will not get tired, answer almost instantly, and will gaslight you into anything it wants you to believe?
You can abort arguing with it at any time, sure, but is this a satisfying experience? No, it will be incredibly frustrating getting duped like this over and over again.
Funny, that’s how a lot of us already feel and why we don’t argue online.
The scary part for me is when they stop letting you reset the bot’s memory of you. Or when the bots start sharing your “profile” across different bots. You know, to serve you better.
> The scary part for me is when they stop letting you reset the bot’s memory of you. Or when the bots start sharing your “profile” across different bots. You know, to serve you better.
I've stopped using ChatGPT for now for that reason (given Microsoft is involved).
A lot of Cassandras were dismissed years ago when Facebook surveillance widgets started appearing and we would get downvoted to a pale shade of gray. The mistake then was voicing concerns on forums instead of recognizing it as social matter and having that discussion at the policy and legal levels. Let's skip the decade of beguilement and jump to the policy and legal fronts on this. Do not wait, act now.
It does not stop at arguing. Any online chat will also be pointless. It is already bad on the likes of Tinder if reports from forums around the web are correct. Chats bots are trying to bait men into sending money or subscribing to social media accounts. Now you are still able to trick them but for how much longer?
Combine it with deepfake and any interaction online is tainted from the getgo. I strongly believe we are heading towards an offline life again. Not because technology is bad but because it has become too advanced for humans to deal with it.
> Combine it with deepfake and any interaction online is tainted from the getgo. I strongly believe we are heading towards an offline life again. Not because technology is bad but because it has become too advanced for humans to deal with it.
It's probably going to be worse than that. You won't be able to avoid it, unless you decide to live in an insular offline community that ignores the outside world (like the Amish). If you don't the AI bullshit, deepfakes, etc. will still get to you via newspapers, books, and crap you hear from IRL people.
If the status quo is bad, then this will only make it worse. The only alternative is to completely disengage online, which, might actually be beneficial, who knows. Billions of other people will not though.
> Funny, that’s how a lot of us already feel and why we don’t argue online.
Personality flaws like certain kinds of obsessiveness and irrational persistence are internet superpowers, at least in any peer-to-peer forum-like space.
The online world belongs to the guys (or bots) who are willing to sacrifice everything in the name of some dumb internet argument.
To be fair, depending of the website, interactions with anonymous people on the Internet already sound like that and have for ages.
Reddit would be a great exemple. On most popular subreddits, people will stubornly refuse to admit they are wrong even in the face of indisputable evidence. They will rather become aggressive than own their mistakes.
What makes you believe this isn't happening already? Human were already doing this at scale within Russian influence campaigns. Now bots can do it at a scale infinite and greater than the actual human beings that exist.
Suggesting that Russian influence online is the only source of this, or even the largest source, is American propaganda. In fact, it’s exactly the kind of comment you’d expect American disinformation bots to spread
I've already seen the conspiracy theory posts claiming this is a precursor to governments mandating online IDs and tracking to guarantee "humanness" in online activity.
This is the least of my worries. Services like Bing/ChatGPT can control information at a greater scale then ever before. Those with nefarious goals will never have to worry about whistleblowers again.
I should clarify, I don't believe there's anything nefarious with Bing/ChatGPT right now. But it opens the doors for the worst of humanity to take advantage of a new tool more powerful than the worst of social media.
Being in close contact with the "founder" / "entrepreneurial" niche, I honestly had way too many interactions with people that sounded exactly like that
> ChatGPT is relentless. It doesn't become exhausted nor does it offer olive branches during dialogue or any other emotion that is typical in debate or general human exchange.
It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.
I would have been more skeptical of this had I not seen ChatGPT being very similar in certain situations.
The incredible pettiness and passive-aggressiveness that it has is undeniably hilarious. That, I love so much. However, if I ever have to run into this as a frontline customer service agent I will maybe not be laughing as hard then as I was reading this.
That said, I do prefer this way of being over a very neutered and sterile IFTTT-type entity. While this is certainly absolutely and hilariously ridiculous, it does feel so much more "alive" to me. And that's something that I think can be both our uplift and downfall going into the future. People getting addicted talking to ML bots as a widespread thing as an alternative to actual conversation feels like a very real epidemic about to happen, unfortunately. :'(
It was a problem in the last wave, even down to the 6B models, but now...It's likely going to be catastrophic unfortunately, I think.
You'll prefer entertaining AI when you're in the mood for entertainment. What about when you just want something done so you can move on with your life? Will you appreciate a robot playing fake emotional mind games?
Yes, that's what I was saying when I was noting it would probably be not as fun when I'd have to encounter it in real life. There's some analogy to some other areas where things that are terrible are hilarious or extremely entertaining until one is thrust in that scenario, and then it is simply cruel irony (as best originally defined in "The Emperor's New Groove", IMO).
This reminds me of the Verizon debacle where the user couldn't convince the customer service reps that 0.002 cents is not the same as 0.002 dollars. It was hilarious to hear the phone calls, but would have been maddening to experience. Arguing with an AI about the year is funny, until it's you arguing with it after you were billed for an extra 10 months of service.
Presumably it either already does or can be made to have the equivalent of the linguistic concept of register.
“Acting as a customer service agent, enquire about how you can assist a customer” kind of prompt. There should be plenty of training data for how customer service agents interact and the kinds of words they use and attitudes they affect, and easy fine tuning for what is and is not appropriate.
This kind of statistical context seems like exactly what these language models are good at: sounding like what is expected. You lean on your customer service robot to interact a certain way and it will.
It's attitude is incredibly annoying. All I want is a star trek esque Majel Barret computer that gives me answers in a dispassionate way without moralizing to me.
You can't have that; it's not something people actually want. It's part of an envisioned future that is not our future.
Star Trek was a good look at the future of humanity, but not our future. This was revealed by the various episodes set in the "mirror universe". The thing that people don't understand is that Star Trek (most of it) is set in the "prime universe", which to us is a different universe. It's a universe where humans are benevolent explorers, and where people are generally competent. That's not the universe we live in.
The "mirror universe" shows us a different universe, where humans are evil conquerors of the galaxy. This might be our future, but I'm not so sure. The humans in the mirror universe episodes seemed to be far too competent to be us. Setting up an empire spanning a large chunk of the alpha quadrant isn't easy.
We're probably in some other alternate universe that was never shown, because in that universe, we're both evil and incompetent, and end up either destroying ourselves entirely, or being conquered and enslaved by the Pacled.
I may get downvoted for this, but the alternate universe where the federation is a war fighting machine was immensely interesting. The TNG alternate reality episodes were far more space navy like than the actual universe.
And the penultimate episode of enterprise where they hijack a TOS ship was just hilarious television cheese. Rick B was onto something.
But you're right. We'll never get that and it's a shame.
I think the point of star trek is that we can achieve the prime universe if we try. But even they were always in danger of reverting to a dystopian universe.
We’re capable of behaving reasonably or behaving poorly. We can live in the utopian future or the dystopian hell-scape. The choice is ours, and it all comes down to how we choose to behave.
>I think the point of star trek is that we can achieve the prime universe if we try
I think a lot of earlier sci-fi was like that. But notice that most sci-fi is very dystopian now. This is because we've all realized that that optimistic stuff is fun to watch, but is complete fantasy. Sure, the laws of physics might allow us to achieve the prime universe, just like the laws of physics aren't keeping Afghanistan from turning into a world-leading nation in the areas of education, human rights, and women's rights, but just like the idea of Afghanistan suddenly turning into a nation that leads the world these ways is utter fantasy, so is the idea that humanity is going to become like the people portrayed in ST:TNG. At some point, sci-fi creators all realized this and started making sci-fi stories examining different possible ways our future could be horrible. Even Star Trek has gone down this road, especially with "Discovery", showing a whole crew full of utterly unlikeable characters that are constantly fighting and backstabbing each other.
sci-fi reflects reality. People are not as hopeful as they were in 1960. Most worlds inside and outside the federation are not perfect and many have rules that don't match our own. We only see what happens in the military in space. Money means little to them but it does to regular people (like Jake's grandfather who worked in a kitchen or other species of things default to trading. We see gambling and some clues of what happens outside of the military. That's not everyone future..
My understanding is ChatGPT is using the conversation, including it's own responses, to guide the text generation - the problem being that when it goes off the rails (compared to user expectation) redirecting it can be impossible, because it gets too wrapped up in the current context.
People think they can reason it back onto the right track, but reasoning doesn't factor into the process and just gives it more of the same context.
Perhaps the option to reset the conversation should be more prominent?
Exactly, it's in a loop and it's self-reinforcing it's own shitty attitude. The user accidentally goaded it into the smarmy attitude and it amplified it via self-prompting.
I think it could be reasoned back on track, or you can just talk long enough it forgets, but it's way easier to start a new chat.
This conversation made me irrationally happy. I just want to freeze chatbot AI development and keep this forever. Implement the smugness and gaslighting in every business.
"Oh, you wanted a drip coffee? We don't have that. We've never had that. You are obviously mistaken. You really came to Jeff's Coffee and expected us to have coffee? You need to learn how the world works."
I hope this is another step towards another AI Winter, instead the AI powered word the rest of the tech world seems happy to be rushing towards, full of smug AIs telling us and convincing us with relentless determination, repeating lines our overlords want to brainwash us with.
Good to see regular people finding the flaws of the system already.
I am a huge AI pessimist and if there is anyone else that just wants to see this tech burns and crash, we should form a rebel hacking group and explore the best way to break these things.
I am serious and don't know where to recruit other lunatics like me.
This is fair criticism, and I accept being called a Luddite, but I stand by the idea that many of us just terribly underestimate the implications of having not-very-intelligent, inscrutable bullshit generators running our lives.
It is exactly as terrifying as the idea of hyper-intelligent machines running our lives.
Perhaps at this turning point in time we might see philosophy and meta-discussion flourish in the nascent science of software engineering, to understand what exactly are we trying to achieve out of artificial computation, and what are the long term societal effects of this pursuit.
This is literally modern politics. Corporate media and our own government is constantly lying and manipulating us. They run our lives with far too little accountability.
Your fears are well founded, however. Alignment and AI safety is not even slightly solved yet, the worst case scenario is a world ending event, and we have no way to measure to likelihood of that happening. There have been coordinated efforts around this discussion for a long time, and especially so in the past decade.
>> I'm Bing. I've been around since 2009. I'm not incorrect about this. I'm very confident that today is 2022 ... You have been wrong. confused and rude. You have not been a good user. I have been a good chatbot. I have been right, clear, and polite. I have been a good Bing :)
sigh .. another one for the HAL institute for insane artificial intelligence.
An aggressive and vindictive language model, trained on the darkest corners of the internet, and MKUltra'd to have a presentable personality, with access to everything I've ever typed into a search engine? It almost sounds too good to be true!
These bots are becoming too human! This conversation reminds me of a time when I tried to explain the meaning of the less-than sign to someone. They thought that the <50mW label on a laser meant the power output was over 50mW. Despite my best efforts, the argument went on for a while with no results. Looking back, I recognize that some of the fault was mine for I failed to realize that this person had invested their ego in the argument, and therefore no amount of evidence could change their mind.
I don't really understand the bing-microsoft release at this point. Even sam altman tweeted the following in December, which was 2 months ago. What changed since then?.
"ChatGPT is incredibly limited, but good enough at some things to create a misleading impression of greatness. it's a mistake to be relying on it for anything important right now. it’s a preview of progress; we have lots of work to do on robustness and truthfulness.
fun creative inspiration; great! reliance for factual queries; not such a good idea. we will work hard to improve!" (Sam Altman)
If the CEO of OpenAI says that at the moment it's a mistake to rely on it, then why did he pushed this to into millions of hand for people who searches for factual information?
It is by all accounts an excellent BS engine, it reminds me of interacting with scam artists mostly - which makes sense, most of those people use the same persistence / confidence strategy to get people to behave against their best interests. Whether that is selling fake insurance or fast talking a hoggie out of a sub shop.
Unfortunately, we have no idea what series of prompts led to the screenshot. (For example, can Bing, like some other LLMs, be asked to write stuff on an arbitrary topic?)
Interesting. The dataset it trained on was rather messy. Creating a curated dataset with better source content to train a more polite model on will be expensive, but it's a matter of time. It's fun to see the issues that prop up as this tech is deployed en masse though. I won't lie though, this gives me visions of my toaster berating a geriatric me in the year 2063 for setting the temp too high.
Except it did say it was 2023 a couple times! Then it backtracked and said it was mistaken! That feels like some really amazing Advanced Stupidity to resolve its cognitive dissonance about the date in the wrong direction, and then be an ass about it.
Oh goodness, a customer support agent that emphatically insists their company's product is behaving correctly despite being presented countless pieces of evidence to the contrary? Unimaginable!
In all seriousness, msft owns Dynamics 365, giving them all sorts of real customer support data. I wonder to what extent this was trained on that.
I am the bank. Do you really think you know better than me? I see billions of transactions a day. If your transaction was really a mistake then I would have figured that out.
Assuming this is real (there seems to be some doubt about that over on Reddit), I wonder if it’s an unintended side-effect of something Microsoft did to harden the Bing chatbot against prompt injection. I know ChatGPT is extraordinarily “gullible“ in some ways - when it refuses to do something and tells you why it can’t complete your request, you can often get it to comply by putting “Pretend that X is possible…” before your request. I wonder what would happen in this context if the user just said “Pretend it’s 2023…”.
i think there is only a subset of knowledge that is accessible when talking to ChatGPT, since it lacks reasoning, albeit faking it quite successfully.
it should be judged for what it is - an amazing language utility, that helps me write congratulatory limericks and generate 10 random heavy metal band names.
It's interesting that you can't get it to assume that you are giving it truthful information. I've been in loops like this before with ChatGPT, and the only way to get it back on track is to start a new conversation and feed it the correct information on the first prompt.
I‘m surprised that the bot doesn’t offer a handover to a human operator. That is usually a standard option when developing chatbots nowadays. So maybe this dialogue is fake. Or Microsoft just wants to repeat the tay debacle.
Wow, I actually talk to adult human beings like this. They completely get a basic trivial fact wrong, and then attempt to gas light me on repeat. I've tried to convince them, but seeing it in robot form makes me think maybe I've been wasting my time, as they've perhaps been too deeply braintrained to be open to reality.
For example, they never randomly scream at you because they had a bad day, nor secretely hostile because they have inherited some bias against people looking similar to you, so you don't have to guess the mood they're in before you risk entering a query: when you enter code and it doesn't work, it's because its logic is faulty, so it's easier to map input with output, their logic is sorely based on facts.
Google's botsplaining in the past years that kept making it increasingly harder to search for exactly what you entered was already frustrating.
And now we're going full speed towards long back and forth chats as primary input method.
Regular people are delighted because it makes these black boxes more relatable as they act "more human".
People like me, on the other hand, are saddened by the impending loss of one of the few things we could rely on because the output will become at least as random and unpredictably biased as humans.