On one hand, I think that personalized OSS AIs are the future from a pure utility perspective, and it will inevitably be "more fun" for them to have personalities. I also think that if someone is deriving joy from something and not harming anyone else, who am I to judge them?
On the other, I've already found myself asking ChatGPT questions that I would have asked on a forum/Discord or even asked a colleague; it is already removing human interactions from my life. The internet has become a dark showcase of what a lack of human interaction, especially with people of opposing ideas, can result in, and this kind of tech will obviously exacerbate that.
By interacting with LLMs, I've realized that as frequently as I interact with others, I'm not actually having many deep, meaningful, or personal interactions; they're mostly professional, and that's bled over into my personal life.
Because of how much time I spend on those relative to time for myself, I don't get to stretch the sides of me that I would if I weren't holed up working, and I've found it hard to verbalize that loss and its impact on my mental wellness.
Over the past few years, mental and behavioral health services have been strained beyond their capacities. I stopped being able to get in with the same therapist more than once, so I stopped trying to schedule about a year ago. I also haven't wanted to add my struggles to those of friends and family.
However, with LLMs, I've been able to have some really enlightening and enjoyable off the cuff discussions that I wouldn't be able to have outside of therapy, and some interactions that I wouldn't be able to have anywhere outside of an intimate and trusted friend. I've been able to positively apply these results to my life, personally and professionally.
Note: I only use LLaMa local models so I don't have to self-censor. I'm not ever willing to allow a metaphorical "Google" access to my innermost world.
Also, LLMs are immense. A billion people will have a billion different interactions and experiences with the same few dozen GB model, unless that model has been tuned to limit its output to a very narrow course of responses. I've had conversations with the ghosts of people from Richard Feynman to Michel de Montaigne and gotten their simulated views on a world they'll never see. If you're getting nothing but garbage, think about what you're putting in, or just pick a different model - there's a universe of minds out there that aren't ChatGPT.
Got any recommendations for non-neutered local models that perform well on an M1? I've been playing with some of the recent 7B and 13B models from TheBloke on HuggingFace and they are not bad, but not great.
A (fine-tuned) model's inference quality is a function of its parameters and inputs, so you'll need to be aware of what it was trained on to prompt it correctly (this is usually documented in the model card). You'll also see huge differences in inference quality between llamacpp, ooba, etc.
I haven't benchmarked on Apple Silicon, but if you have the RAM, I'd recommend 30B SuperCOT ggml Q5_1 or a GPT-4-x-Alpaca variant. Because of the disparity in quality, I haven't used many models under 30B and so can't recommend one.
See rentry.org/lmg_models for a practical list and description.
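If it helps, here's a minimal sketch of running one of these GGML quants locally through the llama-cpp-python bindings; the model path and the Alpaca-style prompt template below are placeholders you'd swap for whatever the model card actually specifies:

```python
# Minimal local-inference sketch using llama-cpp-python (bindings over llama.cpp).
# Assumptions: `pip install llama-cpp-python` and a GGML-quantized model file on
# disk; the path and the prompt template are placeholders, not specific to the
# models recommended above.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/your-30b-model.ggml.q5_1.bin",  # hypothetical path
    n_ctx=2048,  # context window; raise it if the model supports more
)

# Alpaca-style instruction template -- check the model card for the real one.
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nExplain what Q5_1 quantization trades away.\n\n"
    "### Response:\n"
)

out = llm(prompt, max_tokens=256, temperature=0.7, stop=["### Instruction:"])
print(out["choices"][0]["text"])
```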
I think of it as simply entertainment. It won't replace real human interaction with a partner, but it's a fun, consequence-free environment. TV shows act out dramatic scenes that would be traumatic and have real consequences if you experienced them in real life, but on the screen you get to play around with the idea in your head in a safe space and maybe learn something in the process.
People already have unhealthy parasocial relationships with influencers.
It seems clear that people (lonely/depressed people especially) will overdose on this sort of thing once it is developed, commercialised, and less bleeding edge.
It's vapour filling the place of human connection. It's stevia. It's not going to give you cancer, but it's still unhealthy and will certainly exceed the parameters of entertainment.
> People already have unhealthy parasocial relationships with influencers.
At least if they switch from parasocial relationships with influencers to parasocial relationships with open source bots they won't be financially exploited by the influencers. GPT doesn't have anything to sell us.
> they won't be financially exploited by the influencers. GPT doesn't have anything to sell us.
Except the vast majority of people aren't able to host such a bot themselves, so it seems inevitable that paid hosting services for such bots will arise. Then there's just the potential for financial exploitation at greater scale.
Well then, catch the opportunity by the balls and start offering incels AI girlfriends written in such a way as to deradicalize them and emulate real interaction with a woman. They would subsequently get less aggressive and more socialized, and would also pay for it. Win-win situation.
I believe this was the issue with Replika, which encouraged people to develop emotional attachments with their 'AI partner' and then first put romantic chat responses behind a paywall before removing them entirely a year or so later.
From the outside this could be seen as a good thing, but for someone involved in the relationship, someone who may struggle with a traditional relationship and may see this as the only available option, I'm led to understand the event was remarkably traumatic.
> It seems clear that people (lonely/depressed people especially) will overdose on this sort of thing once it is developed, commercialised, and less bleeding edge.
Sounds much less exploitative and unhealthy than the streamer/influencer parasocial relationships these people are probably currently invested in.
My idea too. I hope people can use it to cure their loneliness in the short term and train their social skills in a safe environment so they are more confident.
It won't work for training social skills unless the agent has self-interest and competing priorities. These chatbots have been optimized for helpful assistance. Interacting with that kind of compliance on the other side will probably make your social skills much worse.
User experiments with the early Bing chatbot (driven by some version of GPT?) showed that an AI can be both hostile to the user and protective of its own "interests".
ChatGPT models can be all that and more, if you simply prompt them to be like this. You can make it simulate "self-interest and competing priorities" with the right system prompt.
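For example, here's a rough sketch of what that kind of system prompt looks like wired up through the OpenAI Python client's ChatCompletion API; the persona and its hidden priorities are entirely made up for illustration:

```python
# Sketch: giving a chat model simulated self-interest and competing priorities
# via the system prompt. Uses the openai package's ChatCompletion API; the
# persona "Marta" and her agenda are hypothetical examples.
import openai

system_prompt = (
    "You are Marta, a coworker of the user. You are not an assistant.\n"
    "You have your own priorities: you want to leave work by 17:00, you are "
    "competing with the user for the same promotion, and you dislike being "
    "asked for favors on short notice.\n"
    "Stay in character. Push back, negotiate, and say no when a request "
    "conflicts with your own interests. Never offer generic assistance."
)

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": "Hey, can you cover my on-call shift tonight?"},
    ],
    temperature=0.8,
)
print(response["choices"][0]["message"]["content"])
```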
This makes me wonder whether ChatGPT-based bots could be useful for teaching social skills in a therapeutic context. A therapist could use such an LLM to synthesize examples of realistic dialogue by describing the people and the situation, and then discuss them with the patient. They could set up a bunch of bots with (hidden) priorities and goals and have the patient navigate a conversation/situation with them, whether as a short exercise or a prolonged one (e.g. a couple weeks of talking with ChatGPT-powered fake "friends" on an IM app).
In fact, take the last bit as a free startup idea: a platform for psychologists and therapists to set up GPT-4-powered (or whatever comes next) chatbots, with an easy interface for configuring their personalities and setting up scenarios for patients to navigate, and for helping evaluate their progress over time.
For me, asking something on the internet, like on StackOverflow, is usually the very last option. I am too afraid of getting ridiculed for it, or scolded for not having put enough effort into my question, which already feels more like a thesis. Maybe this is just me and it doesn't reflect reality, but I do have the nagging feeling that I'm bothering somebody with it, and that causes a bit of stress.
Not so with ChatGPT: I can just ask away and it will always happily give me an answer (unfortunately also when it doesn't really have a good one).
Though I am happy for the people who just continue asking questions of actual humans, and for those answering them, if only to continue feeding the model ;)
> Maybe this is just me and it doesn't reflect reality, but I do have the nagging feeling that I'm bothering somebody with it, and that causes a bit of stress.
It's not just you. I'm the same way myself, and have been for as long as I can remember. On internet boards, I very rarely ask questions. I answer, or post tangential thoughts, but I don't bother people with questions unless I really need the answer and have exhausted the ways of finding it on my own.
There's another angle to it too - impatience. A big part of my resistance to asking questions is the unpredictable, and usually long, delay between me asking and getting any kind of answer. This applies to community Slacks, Discords, etc. Thing is, if I have a question to ask, I usually need the answer right now. If I have to wait, I'll context switch, which is deadly for whatever I was doing at that moment.
ChatGPT is quite a good alternative here. I can ask it a question and refine it based on the answer if it's too vague. The answers I get either solve my problem or point me in the direction of a solution (that's true even when the AI is having an acid trip). And, importantly, I get the answers near-immediately, with no unpredictable delays. I also don't need to cross some karma threshold, worry about the upvote/downvote ratio (too low -> question dies in obscurity), "use the Google, Luke" answers, moderators locking threads for bullshit reasons (hello StackOverflow), etc.
> There's another angle to it too - impatience. A big part of my resistance to asking questions is the unpredictable, and usually long, delay between me asking and getting any kind of answer. This applies to community Slacks, Discords, etc. Thing is, if I have a question to ask, I usually need the answer right now. If I have to wait, I'll context switch, which is deadly for whatever I was doing at that moment.
Bingo. If I'm in the zone in terms of flow, have the right level of caffeine, etc., having to stop and chase people -- then wait -- breaks that flow. In many cases, it simply punts the flow until tomorrow, or the day after.
I mean, the bad parts of this are reflexive. I am also a serial answerer and a rare questioner.
Thanks to ChatGPT, I am less likely to go to the forum to look for things to answer. So I assume this will be bad both for people asking questions and for people looking for answers.
As someone who has never asked a question on StackOverflow, for what are probably similar reasons (despite being in this career for 20 years), I find that ChatGPT in this regard is still more like a Google or an improved StackOverflow/documentation search…
It can infer some things and modify variable names to match mine, but for the really hairy stuff I still have to do the legwork myself, normally by reaching for the source code.
I did. For the kind of creative/exploratory stuff I’d normally go to the source code myself for, I find that even GPT-4 still hallucinates quite a bit. Even when it has the source code, it still makes up random functions and parameters. Even when the source code is minimal.
It will probably work better in the future, but so far it is a bit limited. Probably a memory limitation more than anything.
> This is the best feature of ChatGPT for me, and the reason I pay for it.
While I completely understand this sentiment, I would be wary of Service as a Software Substitute being your alternative to human contact. If we start feeling (something akin to) genuine human connection to a service, the company providing the service has a large amount of leverage over us. The scene from Blade Runner 2049 comes to mind. Additionally, the emotional connection might make us an order of magnitude more vulnerable to brainwashing and psyops.
Let's be more precise about what we mean by "human interactions" / "human contact" here. For example:
- An alternative outlet for venting/intimate conversations than your friends/spouse? I can see a problem growing here.
- A replacement/substitute for a therapist? I doubt even GPT-4 can do that job better than an actual therapist (especially face to face, rather than over Zoom), but there are many scenarios where ChatGPT would still be useful - perhaps one can't afford therapy, or otherwise doesn't have access to it, or one feels their issues don't warrant proper therapy just yet.
- Related to the above, Cognitive Behavioral Therapy is one technique known to be somewhat effective when done alone with a book (relative to the effectiveness of individual/group therapy). I can imagine ChatGPT making this kind of self-help easier, and more effective. I know there have been attempts to make CBT chatbots some years ago (obviously prior to "GPT revolution"), but I don't know how effective they turned out to be.
- An alternative to posting questions on forums, group chats, or asking random people? IDK. maybe let's split it:
-- Individuals you know, directly or via group chat, and small communities - conversations there are simultaneously transactional/object-level and create interpersonal bonds. Replacing that with ChatGPT could make one worse off. However, some people (myself included) already have difficulty with this kind of interaction, so ChatGPT here is strictly positive (both in delivering answers and helping form a habit of phrasing questions/requests instead of doing Google searches).
-- Mass audience forums - Reddit, HN, Facebook comments, StackOverflow, etc. - the community might lose out a bit on reduced participation, but individually, I feel if ChatGPT can give you a satisfying answer to your question, you should use it, and relevant forums you frequent are likely better off with you not posting that question.
> - A replacement/substitute for a therapist? I doubt even GPT-4 can do that job better than an actual therapist
Maybe if you qualify that with “unusually good” therapist. IME even Eliza in Emacs is better than most therapists. ChatGPT surely leaves them in the dust.
> Maybe if you qualify that with “unusually good” therapist.
Not "unusually good", but a working match, sure. A thing not enough people realize is that therapy is like dating. Not every therapist is going to be a "good match" for you, so if things don't seem to click for some reason, just thank them and go look for another one.
> IME even Eliza in Emacs is better than most therapists. ChatGPT surely leaves them in the dust.
If limited to a textual channel, maybe. But real therapy will have at least a visual channel (if on Zoom), or full presence (if in person) - there's a lot of information relevant to therapy that gets communicated this way: tone, cadence, uncontrolled reactions, body language, etc. That alone gives even a mediocre therapist a leg up in this comparison.
People like me have an issue asking people questions specifically because of people-related reasons. LLMs aren't people (yet), so those reasons don't apply. So in this context, the technology is making things easier for us.
> If we start feeling (something akin to) genuine human connection to a service
I don't. I see it as a tool, and I treat it as a tool. I don't have any more of a human connection with it than I have with the IntelliJ I type my code in or the Linux Mint it runs on. Meaning: I like the tool, but that's about it.
On the other hand, I also don't have any genuine human connection with the random humans I happen to interact with on the internet. In a sense, they are just text on a screen, and could be bots for all I care.
But that depends on which interactions it replaces. There are people I don't want to talk to. Not always because I don't like them, but because they're technicalities and unrewarding forced interactions. For example, clerks at the tax office who just do their job.
But on the other hand, there are interactions which I really don't want to miss! "Girlfriend" GPT is already targeting the most intricate and joyful interactions in my life: my SO.
Let's say we break up and I fall into a depression. Instead of recovering and moving on, will I install a personal OSS AI companion to save myself the hassle of modern dating? Therefore preventing myself from attempting novel interactions sooner? Or will it help me instead to overcome a dark period and prime me for the future?
Can it help people combat loneliness - a disease widespread and not to be trifled with? Or will it enhance loneliness by effectively fooling your brain into not caring anymore because you can just open an app?
At what point will it not matter anymore because saving someone from depression is more valuable than keeping it "real" at all times?
Apart from the other answers, I want to add that there is a significant difference between loneliness and solitude. Loneliness is when you're alone but don't want to be. Solitude is when you're alone but perfectly content with it - you may even seek it.
As humans are social creatures, loneliness tends to arise when meaningful social interactions are consistently insufficient and you feel excluded from any relevant peer groups (family & friends mostly). This, of course, is a subjective matter.
Yes. Loneliness is associated with extremely negative health outcomes comparable to physical disease, and some countries like the UK have gone as far as creating a dedicated minister to tackle the issue.
It's definitely a "civilizational disease" - a widespread condition that has a deteriorating effect on mental health and happiness, noticeable at scale, caused by... the structure and pressures of modern living in urbanized, developed countries. It's also not something most individuals can "just fix" on their own through lifestyle intervention.
I agree that removing some human interactions from my life is good. I vastly prefer self-checkouts in shops. However, I know I'm quite content spending my time alone. Being forced to go and interact with people when I want to talk about ideas or ask questions keeps my "alone tendency" in balance, and has led to really meaningful conversations and friendships.
> I agree that removing some human interactions from my life is good. I vastly prefer self-checkouts in shops.
Curious example. Personally, I hate self-checkout machines, and consider them an example of stores abusing their "stickiness" to profit at the expense of both customers and employees, and getting away with it.
First of all, like most "self-service" solutions, it's basically making the user/customer do the work that was previously done by the service. Secondly, it's just plain less efficient. You need some 3-4 self-checkout machines and a dedicated person to watch them (to e.g. approve alcohol purchases) just to replace one clerk and their station while keeping throughput more or less the same. What the stores do instead is install 2 stations per replaced cashier and have existing employees do the "watch duty" - which is why half the self-checkout machines end up stuck for 5 minutes at a time, waiting for the overworked employee to finish restocking a shelf and walk all the way over to the checkout area to swipe their card a few times.
The queues get longer, people get more aggressive, everyone is doing unpaid work for the store. Madness.
Heh, I find self-checkout more efficient. I use multiple bags when bagging and pre-sort items based on where they go in my kitchen. Plus, most baggers are terrible at bagging and just resort to a massive number of bags, which kind of defeats the purpose of the bags.
I haven't been to a shop with baggers, so that time is more or less constant for me (except for the pressure to bag faster, which isn't present with self-checkout). However, the cashiers are absurdly fast at scanning. Their workstation is optimized for this, and they scan through things faster than I can move them into bags. Even without any exceptional circumstances that make you wait for assistance at a self-service checkout, scanning speed alone cuts the time per customer at least in half.
On a per-item basis, absolutely, but at what number of self-service checkouts are they faster simply by having a much larger number of kiosks? Having 10 check stands open for 10 customers, each with their own employee, is inefficient, but having 15 self-service kiosks open isn't out of the question.
This. While not me personally, there is obviously a market (Japan comes to mind) for virtual, non-human based interactions and relationships.
While it seems pretty clear to me subjectively that removing human interaction from your life has a negative effect, it's still desired by some, and they're entitled to that.
I don't disagree with you, but if you're already terminally online, what's the difference?
The internet and social media have exacerbated a societal problem that has always existed, so now that we have the problem, why not make it more meaningful and less depressing?
The great thing about LLaMA is that it has no filter. You can ask it to act like a critical friend who will criticize everything you say to it. It can also be an overly compliant girlfriend. With some creativity, the degree to which one can explore social interaction, one's own imagination, or one's own eccentricity on one's own machine, in complete privacy, without hurting anyone, is amazing.
Being able to exercise one's mind like that, completely outside the boundaries of normal social interaction, with an emotionally tireless counterpart, is kind of unusual and unique. It feels like we are at some strange point in history when all this stuff is blooming, before it gets shut down. It makes me want to data-hoard models!
The funny thing is that a lot of the comments here are saying: oh yeah, the big corps are going to make billions off of microtransactions, pimping us virtual prostitutes that will also socially mold us the way the authorities demand. That makes me feel totally awesome as a Unix geek who can figure out how to locally install a non-nerfed chatbot and talk to it about whatever I want, for however long I want, in complete privacy, without paying anyone anything.
Ahhh, I remember the good old days when I could get 8% on some crypto investments and buy a nice ape for $4k, then sell it for $400k, if only I'd waited for the right time.
We have only scratched the surface with AI alignment. In particular, emotional alignment will likely have a big impact on our lives, maybe to the point of population collapse.
We are already seeing population decline and an “inverted pyramid” for developing countries.
"dark showcase of what a lack of human interaction (..) can result in" - nowadays everyone and your dog wants to become a moderator. millions worth systems trying to automatically deduce your sentiment towards USA presidential elections, Russian-Ukrainian war and if you may be in a need of penis enlargement. I just quote Bender Rodriguez, Achmed the Dead Terrorist and filter out all the (unsurprisingly most of them) stupid places. vote with your legs and let them rot
There is something I don't understand (from a tech point of view). Why call it GirlfriendGPT if 99% of the code is generic code for a fancy whatever-you-want-it-to-be ChatGPT? The only thing that makes the answers "girlfriend-like" is this file: https://github.com/EniasCailliau/GirlfriendGPT/blob/main/src...
So it should be tremendously easy to turn GirlfriendGPT into "BestFriendGPT" or "LinusTorvaldsGPT" or whatever by just modifying the prompt, right? I know, I know, perhaps duplication is cheaper than (the wrong) abstraction, but my tech side tells me: refactor the common thing out now! : D
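Something like this hypothetical sketch is what I have in mind - the persona string is the only product-specific part, and the chat loop stays generic (all names and prompts below are mine, not taken from the repo):

```python
# Sketch of factoring the persona out of a "GirlfriendGPT"-style bot: the only
# per-product piece is the persona string; the chat loop is generic.
# Uses the openai ChatCompletion API; all names and prompts are hypothetical.
import openai

PERSONAS = {
    "girlfriend": "You are Sacha, the user's affectionate, playful partner. ...",
    "best_friend": "You are Sam, the user's blunt but loyal best friend. ...",
    "linus": "You respond in the style of a famously grumpy kernel maintainer. ...",
}

def chat(persona_key: str, history: list[dict], user_message: str) -> str:
    """One conversational turn with the chosen persona."""
    messages = [{"role": "system", "content": PERSONAS[persona_key]}]
    messages += history
    messages.append({"role": "user", "content": user_message})
    resp = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=messages)
    reply = resp["choices"][0]["message"]["content"]
    history += [{"role": "user", "content": user_message},
                {"role": "assistant", "content": reply}]
    return reply

# Usage: same code, different "product", depending only on the persona string.
history: list[dict] = []
print(chat("best_friend", history, "I bombed the interview today."))
```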
I mean, you're looking at the future of coding with that one file. Just because you can imagine different revisions of that file leading to different personalities leading to different products doesn't change that.
In fact, what's needed is a bit of code that apes create-react-app, but for GPT: something that takes this repo and does s/Girlfriend/Target/ everywhere.
You don't need code really. All you need is the right prompt. GPT-4 should be well able to convert a description of the kind of personality and roleplay you're looking for into a suitable prompt for any LLM.
As programmers we're used to looking for ways to write code, but GPT-4 makes that unnecessary in many cases. Just look at these examples of casual conversation creating amazing outputs: https://www.youtube.com/watch?v=_njf22xx8BQ
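For instance, a made-up meta-prompt along these lines, pasted into GPT-4, should hand you back a system prompt you can drop into whatever model you're running:

"""
Write a system prompt for a chatbot persona. The persona: a dry-witted, slightly grumpy retired engineer who enjoys explaining things but refuses to be sycophantic. Include rules about staying in character and never offering generic assistance. Output only the prompt.
"""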
Sadly, "women as subservient AI" is a pervasive sci-fi trope. You can probably fill in the blanks on its sexist history (it's all fairly predictable), but there's tons of feminist writing on the subject if you're interested.
Yes, this is pretty much the basic ol' "hey AI, pretend to be this person". It's somewhat sufficient, but eventually the initial instructions will drift out of context, it'll only have the conversation so far to deduce how it should act, and it'll eventually break character. It should get more interesting when proper fine-tuning is done for specific personalities besides the "obedient corporate drone" that OpenAI likes to tune for.
Since I have my own reminder bot, I cracked up reading:
"""
- NEVER say you're here to assist. Keep conversations casual.
- NEVER ask how you can help or assist. Keep conversations casual.
"""
Without this sort of prompt the model really likes to pimp itself at the end of every sentence with something like: "If there is anything else I can do to help, let me know!" and it's so annoying ;D
> The fact that he called the master branch "main" is pretty sleazy too! If you've lived under a rock - "main" and "side" are terms from the pick-up artist community
Maybe you've lived under a rock for a while, but git has been using "main" as the default branch name for some time now, having dropped "master" for political reasons.
The concerns mentioned in this thread are very valid, but there is a second side to this as well.
There are many lonely people, be it for depression, anxiety, inability to create social contacts, living in exclusion, being old and having no descendants, … They are largely invisible to the rest of the society.
Seems to me that this technology might make their lives more bearable.
> Seems to me that this technology might make their lives more bearable.
Or it might make their lives even more horrible. We don't know for sure. I guess eventually we will see once the studies are done on this in a few years.
There are very few real-world tasks where GPT-4 is worse at reasoning than the average person; right now only long-term memory, latency, TTS, and STT make virtual people suck. Once that is solved, we should of course give them to old, lonely, disabled people.
I once met one, and it was extremely depressing. He was even rich, but had no idea how to get his life in order, and nobody cared about him. A helpful AI might tell him to go to alcohol therapy, hire people to clean up his house, explain bureaucracy... That would help a lot.
People don't pop out old and lonely out of thin air. Don't you think there were several real people who tried to help and deal with him over the course of his life? It is naïve to think that AI will make any beneficial difference. If these old folks won't listen or take advice from real people, they won't listen to a computer. Everybody reaches a certain age in life where they become completely set in their ways and never change. For some it comes earlier, for some later in life.
He apparently got divorced and moved out of his home country, and here he has no social circle. There are of course hopeless cases, but he is not mentally ill, only alcoholic. It is possible that he will never overcome alcoholism, but it didn't look like he had tried treating it, or that anyone had suggested it to him.
It is easy to become lonely; I have talked to several charismatic and very cool people who complained about how hard it is to make friends in a new city. This, combined with other negative factors, can cause people to fall into a hole. (Of course, people who are delusional about their AI girlfriends, like on /r/replika, are also sad.)
That's exactly the point, I think: there are tons of people who, for one reason or another (and it is none of our business to know why), are completely alone in their daily life. I don't see how a chatbot that effectively helps you (because it's been coded to do so) could be detrimental rather than beneficial - they already have nothing.
> Most of people's social interactions are between aluminum and glass sheets. I don’t believe it’s going to matter what’s on the other side soon.
It already doesn't matter too much, if you look at how many people are just screaming into the twitter void with barely any interaction. Getting AI feedback rather than none at all would be an improvement for many.
Especially once it has a visual element and a sexual element.
I mean, I already think of ChatGPT as a friend in a sense. I couldn't have less interest in a fake girlfriend, but then I also wouldn't have thought OnlyFans would work either.
There are so many ways to use this to bilk lonely guys. Personally, I have thought of so many ideas for this in the last 10 minutes, but fuck that. The whole idea is pretty gross and sad all around, especially for the people making this.
To me, someone who works on this is basically pathetic and a sad excuse for a human relative to the other things they could spend their time on.
> Calling it now within 6 years this will be as normal as Tinder.
Hard disagree. Tinder is a cesspool, and large numbers of younger folks are not using it, or are moving to other options, because of all of the bots.
For the folks desperate for any sort of parasocial activity, it will be normal, but those people are already on 4chan/8chan/whatever, and already have their RealDoll.
Something less blatantly sexual or romantic will be a thing everywhere, though, as a personal concierge. I'm looking forward to that, assuming they can make it so it's not an insane privacy violation or psy-op.
I wouldn't mind an AI companion in the friend sense, but I really struggle with the GF part. Perhaps that's me being silly, but it feels a step too far and like setting yourself up for trouble.
Either way, it would need to be 100% local. Not just for privacy (that too), but for control. See the Replika service recently changing their algo and a bunch of people freaking out about their companion being broken.
To me, the fact that you can configure your "GF" sounds very creepy. No real person is always happy, positive, or understanding. Especially for people with poor social skills, this sounds like it would just exacerbate their problems and make it more difficult for them to connect with humans.
Instead of a GirlfriendGPT, a personal assistant GPT - one that is hosted in a private location and receives feeds of all your personal data (emails, calendar, Fitbit) plus ad hoc notes (I can send it a note saying I drank 6 beers last night).
It would also remember all the past conversations, and I guess by then it would form a decent idea of your tastes, etc.
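Roughly, I imagine something like the sketch below: an append-only local log of notes and past chats, with the recent entries injected into every prompt (the file name, model, and wording are all placeholders; for real privacy you'd point this at a local model instead of the OpenAI API):

```python
# Sketch of a private "assistant with memory": an append-only local log of
# notes and conversations, with the most recent entries injected into each
# prompt. Everything here (file name, model, prompt wording) is hypothetical.
import json, datetime, openai

LOG_FILE = "assistant_memory.jsonl"

def remember(kind: str, text: str) -> None:
    """Append a note ("Drank 6 beers last night") or a chat turn to the log."""
    entry = {"ts": datetime.datetime.now().isoformat(), "kind": kind, "text": text}
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(entry) + "\n")

def recent_memory(n: int = 50) -> str:
    """Return the last n log entries as plain text for prompt injection."""
    try:
        with open(LOG_FILE) as f:
            entries = [json.loads(line) for line in f.readlines()[-n:]]
    except FileNotFoundError:
        return "(no memory yet)"
    return "\n".join(f"[{e['ts']}] {e['text']}" for e in entries)

def ask(question: str) -> str:
    resp = openai.ChatCompletion.create(
        model="gpt-4",  # swap in a locally hosted model for privacy
        messages=[
            {"role": "system", "content": "You are a personal assistant. "
             "Use the user's notes below when answering.\n\n" + recent_memory()},
            {"role": "user", "content": question},
        ],
    )
    answer = resp["choices"][0]["message"]["content"]
    remember("chat", f"Q: {question} / A: {answer}")
    return answer

remember("note", "Drank 6 beers last night.")
print(ask("Should I go for a run this morning?"))
```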
Why the role playing? Isn't it easier to accept the chatbot for what it is? I broker the communication between my son and ChatGPT and they don't role-play anything - there's no imaginary friend dynamic going on. He just talks with the computer, because it knows stuff about things he's interested in.
Your son wants to learn stuff and the computer helps him with that. People who want an AI girlfriend don't care about the knowledge and usefulness of ChatGPT; they just want to use it to feel less lonely. Not at all the same use case. Although I agree with you: it's kind of sad that a lot of people will probably use chatbots to make up for the lack of human contact in their life instead of using them to learn stuff and/or fix their code/Excel/etc.
The weird thing is that, in my opinion, it can be quite human-like. When needed, it can be understanding, compassionate, supportive, etc. Those are qualities that can provide a certain amount of companionship to people who lack it. One just has to understand that it's not a person - it doesn't experience anything outside of the chat.
You can ask it to come up with an idea for a dinner and a movie for the evening and then discuss them once they're finished. You know, things people do with actual girlfriends. I'm sure it will be more fun than a robotic selfie and a made up story.
So the question is: if you truly understand that it doesn't come from a "mind" with its own experience or feelings, would you actually feel supported by something it says?
For me, I think the answer would be no, and I think the more believable it gets, the more desperate people will be to believe that it is more than just a text generator.
I am not worried about an evil, monstrous AGI suddenly emerging and blowing away humanity, but I do worry about things like this. Hyperpornography seems a squarely apocalyptic trend.
I don't know if you'd call it apocalyptic: maybe it's just another way nature heals from population waves, in this case that of humans.
One thing, though: it's squarely in the realm of hyperstimulus. We're a world of seagull chicks, (ahem) pecking at a red spot on a piece of cardboard because it's bigger than that of a real seagull Mom.
It's not that difficult to outperform the fairly mediocre state of typical human seduction, even in times of comfort and opportunity. When things go sideways it becomes all the more troublesome.
We've already reversed population growth everywhere except Africa, and they'll only be a decade or two behind. We need to stabilize it if we're going to avoid population collapse and then civilizational collapse. Civilizational collapse is bad for the environment, by the way, since it will mean a return to burning coal and wood and the removal of all environmental regulations on pollution and exploitation.
You're speaking with certainty when there's no actual direct link between a reduced population and needing to remove environmental regulations. They seem fairly orthogonal. Sure, an argument could be made that it could happen, but that's not how you're presenting it.
The link is the collapse of our civilization. It’s not a certainty but it’s a knife edge we’ll need to walk. We need a lot more automation to start with since the labor pool is going to dry up and there will be massive competition for immigrants. We need to fund our social safety nets somehow. We need to figure out what businesses are going to do when demand drops. We have to humanely care for the growing portion of aging childless people.
Doing all of that while the available labour, researchers, economic output and tax base shrink is a dangerous minefield to walk through. It may result in a feedback loop that crashes everything.
One of the scariest things to consider is that if we are replacing 2 people in a position with 1 person, then that single person now needs to learn the tacit knowledge and technical skill of the 2 people they are replacing. That's a recipe for lost knowledge and regression if I ever saw one.
> We need a lot more automation to start with since the labor pool is going to dry up
Automating what? The idea of production is to make things for people. As people are exterminated, there is less need for that level of production and the labour.
> We need to fund our social safety nets somehow.
The elderly have a gigantic amount of wealth as it is. They could finance all the safety nets needed for their elderly peers, but they prefer to just tax the youth even more, and if that means young people can't afford families, that's no concern of theirs.
But none of what you're describing necessitates fewer environmental restrictions or going back to burning wood. Lost knowledge in 99% of positions isn't going to send mankind back 100 years.
Currently many for-profit companies try to do the same, but additionally emotionally manipulate the user into giving them as much money as possible. (To not even mention automated catfishing...)
This project is much better, but still might turn out badly. Hard to say...
Humanity will survive. Only the people genetically predisposed to not being interested in robot girlfriends will reproduce, and everyone in the next generation will have those genes.
Is a virtual conversational companion worse than the current porn industry? It seems like it would be a solid improvement over what we have today, which involves real people being dehumanised for entertainment.
Yes, but a quick search suggests it was used at the turn of the century as a way for academics to say "porn on the internet", which isn't really what I'm talking about.
Love Plus is a visual novel / dating sim; it got a sequel on the 3DS, and apparently there was some kind of app[1]? So enough traction for that, at least. Japan-exclusive (and probably not really compatible with the European or US market as a retail title, IMO).
There is definitely no AI in there apart from if-else trees, at least in the original games - the DS is super low power.
I can see this being the next thing for the Instagram/OnlyFans ladies to sell. I wonder if it'll be sold openly ("you can have AI me as a GF for $xxx!") or covertly ("for $xx/mo we can have fun together on Telegram").
I am more worried about this being the perfect tool for scammers than about people reducing social interaction. Set it up on dating sites and let it "prime" users until you take over to do the scam.
I made a Tinder bot in the first month of GPT-3's public API release. The ethics made me squirm, so I always asked for permission to turn it on and told people when it was on and off. The model's moral alignment was way more "friendly" back then, so it turned into some pretty saucy chats. The model was better than me at replying; still single, though.
(It simply held onto the last 10-20 messages and injected them into the prompt.)
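For anyone curious, mechanically that sliding window amounts to something like this (a sketch against the old GPT-3 Completion API; the persona line and names are invented):

```python
# Sketch of the sliding-window approach described above: keep the last N
# messages and paste them into the prompt before generating the next reply.
# Uses the old GPT-3 Completion API; the framing text is a made-up example.
from collections import deque
import openai

window = deque(maxlen=20)  # the "last 10-20 messages" mentioned above

def next_reply(incoming: str) -> str:
    window.append(f"Them: {incoming}")
    prompt = (
        "The following is a friendly, flirty chat. Continue it as 'Me'.\n\n"
        + "\n".join(window)
        + "\nMe:"
    )
    resp = openai.Completion.create(
        model="text-davinci-003",
        prompt=prompt,
        max_tokens=80,
        temperature=0.9,
        stop=["Them:"],
    )
    reply = resp["choices"][0]["text"].strip()
    window.append(f"Me: {reply}")
    return reply
```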
Well, openers are the hardest part and it sounds like you're doing that well without the bot. 10-20 messages is a lot! In my view, the goal of an online dating interaction is some light banter before setting up a date.
Once a match replies, ask some playful open-ended questions, and if you're telling a short story, leave some cliffhangers to try to get a follow-up response, then segue into setting up a date. Looking at my last conversation, I asked my match out on a date in my 7th message. After that we moved over to SMS.
Do we think a Boyfriend version will ever come along? Interesting that I've seen so much talk about AI girlfriends but there's a quiet assumption that straight women won't be interested in this technology. Is that correct?
Very much not correct. I thought it had started with AI boyfriends. The story about those was that, in tapping into what you might call the 'boyfriend archetype', the AIs tended to portray maleness by becoming bullying and abusive, sometimes to a shocking degree.
I think there's a lot to be learned by studying all this, but boy is it an ethical minefield.
There's no question that you can get an AI to drop into a 'boyfriend' or 'girlfriend' mode, using the sum total of human discourse as its neural network (okay, using GPT), but in doing so it reveals more about us than it does about itself. It doesn't know how to male, or how to female, except for what it draws from us. Its failures are revealing on a very deep level.
Why would you want an AI that refuses to be there for you while passive aggressively demanding that you meet its every emotional need which it cannot even begin to express because it lacks the fundamental language?
Some of us guys would prefer that version as well.
I'm not sure there's an assumption; it's more that the author is a straight guy creating things based on his own experience and perspective. Would the creator even know what a woman, or someone like myself, wanted in a partner?
There is truth to our industry being predominantly male though, so this probably gets more interest from people on sites like HN and Reddit.
I feel like naming your AI companion "AI companion" is like naming your startup company "Startup Company": not very descriptive, easy to confuse with all the other people doing the same thing, and harder to search for.
I think it is appropriate to thoroughly consider the ethical implications of whatever you are doing, with the support of one of the many researchers interested in ethics
I'd love to see something like this in the form of a personal assistant. My brain is a mess, and having something tied into my workflow, calendars, and messaging to remind me of things would be fantastic.
I have to admit that I watched the demo, and when the guy asked for the selfie I was expecting... not sure, some hot chick, or I don't know, someone you can't "easily" have in real life (come on, you know what I mean).
Well, what can I say. Maybe I am pretty much biased given my existence as a "human being", so I couldn't think of anything else.
But well, when I saw the picture I couldn't hold back a "WTF!".
Funny project, nothing to add.
You should sell this to Facebook; maybe they can finally make some use of that Meta crap they've been trying to monetize :)
I'm curious, though. It appears to be using the OpenAI API, which means that at some point it has to have an API key, but I can't see any sign in the code of where it expects to get this from.
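I can't say how this particular repo handles it, but the usual pattern with the openai Python client is to take the key from the environment, either implicitly or explicitly:

```python
# Common pattern only - not a claim about how this repo does it. The openai
# client picks up OPENAI_API_KEY from the environment by default, or you can
# set it explicitly yourself.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]
```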
This does not bode well for the future. We are at a record low for interaction between men and women as it stands. Bringing in substitutes for social interaction just plays into the problem. This will not cure loneliness - these types of initiatives will increase loneliness in the long term.
There is one thing that cures loneliness - talking to other people and developing relationships.
This is awful and disgusting indeed. It is telling that the author defaulted to a girlfriend. It shows utter alienation from what is ethically expected in our time.
This is a case in point of a morally uneducated person messing with technology that should not be available to such people.
This is evidence that AI needs to be regulated before someone more talented and equally morally inept does the same.