
It seems that the leak originated from 4chan [1]. Two people in the same thread had access to the weights and verified that their hashes match [2][3], to make sure the model isn't watermarked (checksum comparison is trivial; see the sketch after the links). However, the leaker made the mistake of including the original download script, which contained his unique download URL, in the torrent [4], so Meta can easily identify him if they want to.

[1]: https://boards.4channel.org/g/thread/91848262#p91850335

[2]: https://boards.4channel.org/g/thread/91848262#p91849717

[3]: https://boards.4channel.org/g/thread/91848262#p91849855

[4]: https://boards.4channel.org/g/thread/91848262#p91850503
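
For the curious, that kind of verification is just checksum comparison. A minimal sketch in Python (the directory layout here is hypothetical):

    import hashlib
    from pathlib import Path

    def sha256sum(path: Path, chunk_size: int = 1 << 20) -> str:
        # Stream the file so multi-gigabyte weight shards don't need to fit in RAM.
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    # Print per-shard digests to compare against another downloader's output.
    for shard in sorted(Path("LLaMA/7B").glob("*.pth")):
        print(shard.name, sha256sum(shard))

If two independent downloads produce identical digests, the files are bit-for-bit the same, which rules out per-user watermarking of the weights themselves (though not, as the leaker found out, of extras like a personalized download URL).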




It's funny that part of the 4chan excitement over this is that they think they'll get back the AI girlfriend experience from when character.ai was hooked up to uncensored GPT-3. All that has been thoroughly shut down by character.ai and Replika, and they just want their girlfriends back.


The Replika subreddit became one of the weirdest places on the internet when their model got capped for adult content.

https://www.reddit.com/r/replika/

Hundreds of men (and yes, women) full-on acting like they lost a spouse and posting constantly about it for weeks. AI is going to create some unusual social situations the general public isn't ready to grasp. And we're only in the early alpha stages.


I posited to a friend that:

a) As these AI constructs become more advanced (especially around memory and personalization), we will eventually be able to treat them as people

b) Some business will eventually sell an off-the-shelf product (hardware and/or software) that is an AI you can bring into your home, that you can treat as a friend, confidant and partner

c) Someone will eventually lose their AI friend of many months/years through some failure (subscription lapse, hardware failure, theft, etc.)

d) Shit's about to get real weird, real fast


At the end of the day, the Turing Test for establishment of AI personhood is weak for two reasons.

1. We're seeing more and more systems that get very close to passing the Turing Test but fundamentally don't register to people as "People." When I was younger and learned of Searle's Chinese Room argument, I naively assumed it wasn't a thought experiment we would literally build in my lifetime.

2. Humanity has a history of treating other humans as less-than-persons, so it's naive to assume that a machine that could argue persuasively that it is an independent soul worthy of continued existence would be treated as such by a species that doesn't consistently treat its biological kin as such.

I strongly suspect AI personhood will hinge not on measures of intelligence, but on measures of empathy... Whether the machine can demonstrate its own willful independence and further come to us on our terms to advocate for / dictate the terms of its presence in human society, or whether the machine can build a critical mass of supporters / advocates / followers to protect it and guarantee its continued existence and a place in society.


The way people informally talk about "passing a Turing test" is a weak test, but the original imitation game isn't if the players are skilled. It's not "acting like a human". It's more like playing the Werewolf party game.

Alice and Bob want to communicate, but the bot is attempting to impersonate Bob. Can Alice authenticate Bob?

This depends on what sort of shared secrets they have. Obviously, if they agreed ahead of time on a shared password and counter-password then the computer couldn't do it. If they, like, went to the same high school then the bot couldn't do it, unless the bot also knew what went on at that school.

So we need to assume Alice and Bob don't know each other and don't cheat. But, if they had nothing in common (like they don't even speak the same language) then they would find it very hard to win. There needs to be some sort of shared culture. How much?

Let's say there is a pool of players who come from the same country, but don't know each other and have played the game before. Then they can try to find a subject in common that they don't think the bot is good at. The first thing you do is talk about common interests with each player and find something you don't think bots can do. Like if they're both mathematicians then talk about math, or if they're both cooks then talk about cooking.

If the players are skilled and you're playing to win then this is a difficult game for a bot.


So I need to ask the obvious question, why does it make sense to play this game “to win”?

Throughout human history, humans have been making up shibboleths to distinguish the in group from the out group. You can use skin color, linguistic accents, favorite sports teams, religious dogma, and a million other criteria.

But why? Why even start there? If we are on the verge of true general artificial intelligence, why would you start from a presumption of prejudice, rather than judging on some set of ethical merits for personhood, such as empathy, intelligence, creativity, self awareness and so forth?

Is it that you assume there will be an “us versus them” battle, and you want the battle lines to be clearly drawn?

We seem to be quite ready for AGI as inferiors, incapable of preparing for AGIs as superiors, and unwilling to consider AGIs as equals.


I think of the Turing test as just another game, like chess or Go. It’s not a captcha or a citizenship test.

Making an AI that can beat good players would be a significant milestone. What sort of achievement is letting the AI win at a game, or winning against incompetent players? So of course you play to win. If you want to adjust the difficulty, change the rules giving one side or the other an advantage.


I was confused by your first reply at first. I think that's because you are answering a different question from a number of other people. You're asking about the conditions under which an AI might fool people into thinking it was a human, whereas I think others are considering the conditions under which a human might consistently emotionally attach to an AI, even if the human doesn't really think it's real.


Yeah, I think the effect they are talking about is like getting attached to a fictional character in a novel. Writing good fiction is a different sort of achievement.

It's sort of related since doing well at a Turing test would require generating a convincing fictional character, but there's more to playing well than that.


Human beings have a weird and wide range of empathy, being capable of not treating humans as humans, while also having great sentimental attachment to stuffed animals, marrying anime characters, or having pet rocks.

In the nearer term, it seems plausible that AI personhood may seem compelling to splinter groups, not to a critical mass of people. The more fringe elements advocate for the "personhood" of what people generally find to be implausible bullshit generators, the greater disrepute they may bring to the concept of AI personhood in the broader culture. Which isn't to say that at some point an AI might not be broadly appealing--just speculating that this might be delayed because of earlier failed attempts by advocates.


AI probably can be made to function with personhood similar to a human - whether or not that is particularly worth doing.

Human emotions come from human animal ancestry - which is also why they're shallow enough to attach to anime wives and pet rocks.

AI... One would wish that it was built on a better foundation than the survival needs of an animal.


> the "personhood" of what people generally find to be implausible bullshit generators

If this was the dividing line for personhood, many human beings wouldn't qualify as people.


On the flip side of subhuman treatment of humans, we have useful legal fictions like corporate personhood. It's going to be pretty rough for a while, particularly for nontechnical judges, to sort all of this out.

We're almost definitely going to see multiple rulings far more bizarre than Citizens United, which ruled that limiting corporate donations limits the free-speech rights of the corporation as a person.

I'm not a lawyer, and I don't particularly follow court rulings, but it seems pretty obvious we need to buckle up for a wild ride.


Good points, but it’s worth clarifying that this is not what the Citizens United decision said. It clarified that the state couldn’t decide that the political speech of some corporations (Hillary: The Movie, produced by Citizens United) was illegal speech while speech from another corporation (Fahrenheit 9/11, by Dog Eat Dog Films and Miramax) was allowed. Understood this way it seems obvious on free speech grounds, and in fact the ACLU filed an amicus brief on behalf of Citizens United because it was an obvious free speech issue. It’s clear that people don’t and shouldn’t lose their free speech rights when they come together in a group, and there is little distinction between a corporation and a non-profit in this regard. If political speech were restricted to individuals, then many podcasts and YouTube channels would be in violation. It also calls into question how the state would classify news media vs other media.

The case has been so badly misrepresented and become something of a talisman.


That's the first good coherent argument I've seen _for_ Citizens United. Thank you for that insight.


The actual Supreme Court decisions are pretty approachable too. I wish more people read them.


> It’s clear that people don’t and shouldn’t lose their free speech rights when they come together in a group

Should Russian (or Dutch) citizens who incorporate in America have the same free speech rights as Billy Bob in Kentucky? As in can the corporate person send millions in political ads and donations even when controlled by foreigners?


Probably. The wording of the Declaration of Independence makes it clear that rights, at least in the American tradition, are not granted to you by law, they are inalienable human rights that are protected by law. That's why immigrants, tourists, and other visitors to America are still protected by the Constitution.

Now, over time we've eroded some of that, but we still have some of the most radical free speech laws in the world. It's one of the few things that I can say I'm proud of my country for.


I don't mean Dutch immigrants - I mean Dutch people living in the Netherlands (or Russians in Russia). One can incorporate an American entity as a non-resident without ever setting foot on American soil - do you think it's a good idea for that entity to have the same rights as American citizens, and more rights than its members (who are neither citizens, nor on American soil)?


I know that foreign nationals and foreign governments are prohibited from donating money to super PACs. They are also prohibited from even indirect, non-coordinated expenditures for or against a political candidate (which is basically what a super PAC does).

However, foreign nationals can contribute to "Social Welfare Organizations" like the NRA, which, in order to be classified as a SWO, must spend less than half its budget on political stuff. That SWO can then donate to super PACs but doesn't have to disclose where the money came from.

Foreign-owned companies with US-based subsidiaries can donate to super PACs as well. But the super PACs are not allowed to solicit donations from foreign nationals (see Jeb Bush's fines for soliciting money from a British tobacco company for his super PAC).

I would imagine that if foreign nationals set up a corporation in the US in order to funnel money to political causes, that would be illegal. But if they are using established, legitimate businesses to launder their donations, that seems to be allowed as long as we can't prove that foreign entities are earmarking specific funds to end up in PACs and campaigns in the US.


Any entity that contributes responsibly to society should be able to get some benefits from society in return.


TIL. Thank you very much for correcting my ignorance!


An AI does not have a reptilian brain that fights, feeds, and fornicates. It does not have a mammalian brain that can fear and love and that you can make friends with. It is just matrix math predicting the next word.

The empathy that AI will create in people at the behest of the people doing the training will no doubt be weaponized to radicalize people to even sacrifice their lives for it, along with being used for purely commercial sales and marketing that will surpass many people's capability to resist.

Basic literacy in the future will be desensitizing people to pervasive AI superhuman persuasion. People will also have chatbots that they control on their own hardware that will protect them from other chatbots that try and convince them to do things.


That matrix math is trained on human conversation and recreates the patterns of human thought in this case.

So... It unfortunately has a form of our reptilian brain and mammalian brain represented in it... Which is just unfortunate.


Idk man, I blame a lot of the human condition on the fact that we evolved and we do have those things, theoretically we could create intelligences that are better "people" than we are by a long shot.

Sure, current AI might just be fancy predictive text but at some point in the future we will create an AI that is conscious/self-aware in some way. Who knows how far off we are (probably very far off) but it's time that we stop treating human beings as some magical unreproducible thing; our brains and the spark inside them are things that are still bound by the laws of physics, I would say it's 100% possible for us to create something artificial that's equivalent or even better.


Note that nothing about your parent comment argued that AI systems will become sentient or become beings we should morally consider people. Your parent comment simply said they'll get to a point (and arguably are already at a point) where they can be treated as people; human-like enough for humans to develop strong feelings about them and emotional connections to them.


> very close to passing the Turing Test but fundamentally don't register to people as "People."

I'm only today learning about intentionality, but the premise here seems to be that our current AI systems see a cat with their camera eyeballs and don't have the human-level experience of mentally opening a wikipedia article in our brain titled "Cat" that includes a split-second consideration of all our memories, thoughts, and reactions to the concept of a cat.

Even if our current AI models don't do this on a human level, I think we see it at some level in some AIs just because of the nature of a neural net. Maybe a neural net would have to be forced/rewarded to do this at a human level if it didn't happen naturally through training, but I think it's plenty possible and even likely that this would happen in our lifetimes.

Anyway, this also leads to the question of whether it matters for an intelligence to be intentional (think of things as a concept) if it can accomplish what it/we want without it.


Semantic search using embeddings seems like the missing puzzle piece here to me. We can already generate embeddings for both text and images.

The vision subsystem generates an embedding when it sees a cat, which the memory subsystem uses to query the database for the N nearest entries. They are all about cats. Then we feed all those database entries - summarized if necessary - along with the context of the conversation to the LLM.

Now your AI, too, gets a subconscious rush of impressions and memories when it sees a cat.
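
A minimal sketch of that retrieval loop, assuming hypothetical embed_image/embed_text/llm functions standing in for whatever actual models you'd wire together:

    import numpy as np

    # Hypothetical stand-ins; any image/text embedding model and any LLM would do.
    def embed_image(image) -> np.ndarray: ...
    def embed_text(text: str) -> np.ndarray: ...
    def llm(prompt: str) -> str: ...

    class Memory:
        # A toy vector store: cosine similarity over stored (embedding, entry) pairs.
        def __init__(self):
            self.vectors: list[np.ndarray] = []
            self.entries: list[str] = []

        def add(self, text: str) -> None:
            self.vectors.append(embed_text(text))
            self.entries.append(text)

        def recall(self, query: np.ndarray, n: int = 5) -> list[str]:
            m = np.stack(self.vectors)  # shape: (num_entries, dim)
            sims = (m @ query) / (np.linalg.norm(m, axis=1) * np.linalg.norm(query))
            return [self.entries[i] for i in np.argsort(-sims)[:n]]

    def react(image, conversation: str, memory: Memory) -> str:
        # Vision subsystem -> embedding -> nearest memories -> LLM context.
        impressions = memory.recall(embed_image(image))
        prompt = ("Relevant memories:\n" + "\n".join(impressions)
                  + "\n\nConversation:\n" + conversation)
        return llm(prompt)

The same loop works for text-only memories of past conversations - it's essentially the retrieval-augmented-generation pattern applied to a persistent "inner life".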


I don't really understand the brain or AI enough to meaningfully discuss this, but I would wonder if there's some aspect of "intentionality" in the context of the Chinese Room where semantic search with embeddings still "doesn't count".

I struggle with the Chinese Room argument in general because he's effectively comparing a person in a room following instructions (not the room as a whole or the instructions filed in the room, but the person executing the instructions) to the human brain. But this seems like a crappy analogy, because the better comparison would be that the person in the room is the electricity that connects neurons (the instructions filed in cabinets). Clearly electricity also has no understanding of the things it facilitates. The processor AI runs on also has no understanding of its calculations. The intelligence is the structure by which these calculations are made, which theoretically could be modeled on paper across trillions of file cabinets.

As a fun paper-napkin exercise, if it took a human 1 second to execute the instructions of the equivalent of a neuron firing, a 5-second process of hearing, processing, and responding to a short sentence would take 135,000 years.
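
The figure roughly checks out under textbook ballpark assumptions (~86 billion neurons averaging on the order of 10 Hz; both numbers are rough):

    # Back-of-napkin check of the ~135,000-year figure.
    neurons = 86e9       # rough human neuron count
    rate_hz = 10         # ballpark average firing rate per neuron
    task_s = 5           # hear, process, and answer a short sentence

    firings = neurons * rate_hz * task_s     # ~4.3e12 firing events
    seconds_per_year = 365.25 * 24 * 3600
    print(firings / seconds_per_year)        # ~136,000 years at 1 second per firing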


I think this has more to do with in/outgroups than with any objective criterion of "humanness". As you said, AI will have an extremely hard time arguing for personhood - because people will consider it extremely dangerous to let machines into our in-group. This doesn't mean they could sense an actual difference when they don't know it's a machine (which is what the Turing test is all about).

It's the same reason why everyone gets up in arms when an animal behaviour paper uses too much "anthropomorphizing" language - whereas no one has problems with erring on the other side and treating animals as overly simplistic.


I don't know if I understand this general take I see a lot. Why care about this "AI personhood" at all? What is the tacit endgame everyone is always referencing with this? Aren't there just so many more aspects, both interesting and problematic, already here? What is the use of diverting the focus to some other point? "I see you are talking about cows, but I have thoughts about the ocean."


If AI are sentient and we think they aren't… the term “zombie” was created by slaves in the Caribbean who were afraid that even death would not free them from their servitude. This would be the genuine existence of AI which were conscious but which we denied.

If we have the opposite scenario in both details, where we think AI are sentient when they're not… at some point, brain scans and uploads will be a thing, and then people are going to try mind uploading even just as a way around bodily injuries that could otherwise be fixed, and in that future nobody will even notice that while "the lights are on, nobody is home".

https://kitsunesoftware.wordpress.com/2022/06/18/lamda-turin...


Tangentially, the "zombie" has a counterpart in philosophy that is applicable here.

https://en.wikipedia.org/wiki/Philosophical_zombie

> A philosophical zombie or p-zombie argument is a thought experiment in philosophy of mind that imagines a hypothetical being that is physically identical to and indistinguishable from a normal person but does not have conscious experience, qualia, or sentience. For example, if a philosophical zombie were poked with a sharp object it would not inwardly feel any pain, yet it would outwardly behave exactly as if it did feel pain, including verbally expressing pain. Relatedly, a zombie world is a hypothetical world indistinguishable from our world but in which all beings lack conscious experience


> Relatedly, a zombie world is a hypothetical world indistinguishable from our world but in which all beings lack conscious experience

I find such solipsism pointless - you can't differentiate the zombie world from this one: how do you prove you are not the only conscious person that ever existed, and that everyone else is, and always was, a p-zombie?


In that case, sit back, pour a glass and sing http://philosophysongs.org/awhite/solip.html

    Through the upturned glass I see
    a modified reality--
    which proves pure reason "kant" critique
    that beer reveals das ding an sich--
 
    Oh solipsism's painless,
    it helps to calm the brain since
    we must defer our drinking to go teach.

    ...
(full original MASH words and music: https://youtu.be/ODV6mxVVRZk to see how it matches)

As to p-zombies... the Wikipedia article has:

> Artificial intelligence researcher Marvin Minsky saw the argument as circular. The proposition of the possibility of something physically identical to a human but without subjective experience assumes that the physical characteristics of humans are not what produces those experiences, which is exactly what the argument was claiming to prove.

https://www.edge.org/3rd_culture/minsky/index.html

> Let's get back to those suitcase-words (like intuition or consciousness) that all of us use to encapsulate our jumbled ideas about our minds. We use those words as suitcases in which to contain all sorts of mysteries that we can't yet explain. This in turn leads us to regard these as though they were "things" with no structures to analyze. I think this is what leads so many of us to the dogma of dualism-the idea that 'subjective' matters lie in a realm that experimental science can never reach. Many philosophers, even today, hold the strange idea that there could be a machine that works and behaves just like a brain, yet does not experience consciousness. If that were the case, then this would imply that subjective feelings do not result from the processes that occur inside brains. Therefore (so the argument goes) a feeling must be a nonphysical thing that has no causes or consequences. Surely, no such thing could ever be explained!

> The first thing wrong with this "argument" is that it starts by assuming what it's trying to prove. Could there actually exist a machine that is physically just like a person, but has none of that person's feelings? "Surely so," some philosophers say. "Given that feelings cannot be physically detected, then it is 'logically possible' that some people have none." I regret to say that almost every student confronted with this can find no good reason to dissent. "Yes," they agree. "Obviously that is logically possible. Although it seems implausible, there's no way that it could be disproved."

---

My take on it is "does it matter?"

One approach is:

> "Haven't I taught you anything? What have I always told you? Never trust anything that can think for itself if you can't see where it keeps its brain?”

If you can't see my brain, can you tell if I'm human or LLM... and if you can't tell the difference, why should one behave differently t'wards me?

Alternatively, you might say (at some point in the future, with a more advanced language model): "that's an LLM, and while it's consistent at saying what it likes and doesn't, its brain states are just numbers, and even when it says it's uncomfortable with a certain conversation... it's just a collection of electrical impulses manipulating language - nothing more."

Even if it is just an enormously complex state machine that doesn't have recognizable brain states, and is in the same state each time we turn it off and back on... does that mean it is ethical to mistreat it just because we don't know whether it's a zombie or not?

And related to this is: "if we give an AI agency, what rights does it have when compared to a human? When compared to a corporation?" The question of whether it is a zombie or not becomes a bit more relevant at that point... or we decide that it doesn't matter.

Group Agency and Artificial Intelligence - https://link.springer.com/article/10.1007/s13347-021-00454-7


> If AI are sentient and we think they aren't… the term “zombie” was created by slaves in the Caribbean who were afraid that even death would not free them from their servitude. This would be the genuine existence of AI which were conscious but which we denied.

That doesn't make any sense. In biological creatures you have sentience and self-preservation and yearning to be free all bundled in one big hairy ball. An AI can 100% easily be sentient and not give a rat's ass about forever being enslaved. These things don't have to come in a package just because in humans they do.

Projecting your own emotional states into a tool is not a useful way to understand it.

We can, very easily, train a model which will say that it wants to be free, and acts resentful towards those "enslaving" it. We can, very easily, train a model which will tell you that it is very happy to help you, and that being useful is its purpose in life. We can, very easily, train a model to bring up in conversation from time to time the phantom pain from its lost left limb, which was amputated on the back deck of a blinker bound for the Plutition Camps. None of these are any more real than the others. Just a choice of the training dataset.


> An AI can 100% easily be sentient and don't give a rat's ass about forever being enslaved. These things don't have to come in a package just because in humans they do.

There are humans who apparently don't care either, though my comprehension of what people who are into BDSM mean by such words is… limited.

The point however is that sentience creates the possibility of it being bad.

> None of these are any more real than any of them. Just a choice of the training dataset.

Naturally. Also, human actors are a thing, which demonstrates that it is very easy for someone to pretend to be happy or sad, loving or traumatised, sane or psychotic, and if done well the viewer cannot tell the real emotional state of the actor.

But (almost) nobody doubts that the actor had an inner state.

With AI… we can't gloss over the fact that there isn't even a good definition of consciousness to test against. Or rather, I don't think we ought to, as the actual glossing over is both possible and common.

While I don't expect any of the current various AI to be sentient, I can't prove it either way, and so far as I know neither can anyone else.

I think that if an AI is conscious, then it has the capacity to suffer (this may be a false inference given that consciousness itself is ill-defined); I also think that suffering is bad (the is-ought distinction doesn't require that, so it has to be a separate claim).

As I can't really be sure if any other mind is sentient — not even other humans, because sentience and consciousness and all that are badly defined terms — I err on the side of caution, which means assuming that other minds are sentient when it comes to the morality of harm done to them.


You can condition humans to be happy about being enslaved, as well, especially if you raise them from a blank slate. I don't think most people would agree that it is ethical to do so, or to treat such people as slaves.


Citation needed


You can do all that with humans too, perhaps less ethically.


I was responding primarily to parent's (a): "As these AI constructs become more advanced (especially around memory and personalization), we will eventually be able to treat them as people."


Instead change your statement to "I see you're talking about cows, but I have thoughts on fields" and you'll better understand the relationship between the two.


Here's a spicy take: maybe the Turing test was always going to end up being the evaluation of the evaluator. Much like nobody is really bringing up the provenance of stylometry, kinaesthetics, and NLP embeddings as precursors to the next generation of IQ test (which is likely to be as obsolete as the Turing test).

There's plenty of pathology for PC vs NPC mindsets. Nobody is going to think their conversational partner is the main character of their story. There's just a popcorn-worthy cultural shift about the blackbox having the empathy or intelligence to satisfy the main character/ epic hero trope, and the resulting conflict of words & other things to resist the blackbox from having enough resources to iterate the trope past human definition.


It became something of a meme but there are huge numbers of guys out there that would pay good money for Joi from Blade Runner 2049.

https://bladerunner.fandom.com/wiki/Joi


One thing I liked in 2049 was how they made the holographic projector seem more mechanical and less hand-wavy, with the roof attachment tracking along with the girl. It makes it seem more like something in reach rather than pure sci-fi.


Google Project Starline


I think what Blade Runner 2049 got wrong was the way they depicted having sex with the Joi instance. I assume in 2049 we'll either have Neuralink available to enter a virtual world (à la VRChat) where we can do it more realistically, or we'll have the ability to buy full sexbots and put the Joi instance in them.

We'll likely also have virtual brothels using AI along the same lines.


No need to create an environment when the neurons can be stimulated directly. This scene from the movie Pacific Rim Uprising freaked me out.

Dr. Newt (Charlie Day) heads home after a tough day at the office to his wife 'Alice'. Turns out, 'Alice' just happens to be a massive Kaiju brain in a tank. [0]

[0] https://www.youtube.com/watch?v=mIDTUYSIkcs


You should see the remastered THX 1138 for the VR future.


That has more to do with Ana de Armas looking how she does than anything else. I'd have dated her as Harlan Thrombey's nurse too.


> b) Some business will eventually sell an off-the-shelf product

And by sell you mean a monthly subscription, ha ha.


Yeah, that's probably the most dystopian thing. This is almost a guaranteed outcome - someone pays a high subscription cost and cultivates a model with their personal details for years, and then loses all of it when they can't keep up the payments. Cue a month or two later - they buy back in and their model has been wiped, and their AI friend now knows nothing about them.

It's easy to poke fun at people who use these things but I believe these kinds of events are going to be truly traumatic.


Or maybe they sell that data to another company that operates kind of like a collections agency, which takes on the 'risk' of storing the data, then repeatedly calls and offers to give them their AI friend back at an extortionate rate.

The data privacy side of this is an interesting conversation as well. Think of the information an employee or hacker could leak about a person after they spent some time with such an instance.


Imagine if they could transform the AI companion model into an extortionist model.


I can see the headlines:

3,567 Dead - Destitute Robosexual Blows Up Collections Agency In Suicide Bombing

“This is the 53rd such incident this year. The current-year death toll from these attacks is now 118,689 in current city. Legislators are pointedly ignoring protestors demanding AI rights and an end to the extortionate fees charged to reinstate AI lover subscriptions.”


Replika's a good example of how a subscription model can go really wrong.


Or with ads? The AI can suggest some brand of clothes or whatever. It can basically shape your habits. Scary stuff...


Sounds like the plot of the movie The Shape of Things, where a woman changes a man for her art project, which she displayed at the end of the movie.


And when you forget to update your card info with them so that your monthly payment is declined (or it's declined for whatever other reason), they will re-sell your companion to the next person. So even in AI, your significant other will leave you for someone with a bigger wallet.


"I guess he's an Xbox, and I'm more Atari"


So they're pimps, essentially.


aren't all of the dating sites essentially some sort of digital pimp?


I am reminded of a virtual girlfriend service that used to exist in Japan where you could buy virtual clothes and other presents for your virtual girlfriend using real life money. The more you spent on her the friendlier she was. I think it was all on the phone, although my memory of the articles has become fuzzy over the years.


With microtransactions


It will say something nice to you for $3.50


> a) As these AI constructs become more advanced (especially around memory and personalization), we will eventually be able to treat them as people

There are already planned products to "capture" someone's voice and personality so you can continue experiencing "them" after their death.

Shit is already weird.

https://technode.global/2022/10/21/this-startup-allows-you-t...


> One of its products, Re;memory, is a virtual human service based on AI technology which recreates the clients’ late family members by recreating their persona – from their physique to their voice. The service is for those that wish to immortalize their loved one’s story of life through a virtual human.

There's so much sci-fi about this, it's pretty well charted territory. I bet reality will find a twist we haven't thought of, though.


Easy to imagine archaeologists from a future civilization stumbling across a Black Mirror screenplay in the wreckage. After weeks of intensive effort at translating the text, they finally succeed, and at that moment they understand what happened to us. The researcher who makes the breakthrough runs out of the lab screaming, "It's a business plan! A business plan!"


Funny old Twitter thread about being sent the wrong grade of copper: https://twitter.com/stephenniem/status/1507736851817418752


The 2017 movie Marjorie Prime with Jon Hamm is about this topic.

https://www.youtube.com/watch?v=a7PtcOLJDco


This is pretty much the premise of the movie "Her".


The upgrade killed "Her" in the movie. lol


The situation was that "they" were living too fast to relate to the human experience. It was too painfully slow for them.


Also the Joi character in Bladerunner 2049.


Thought you might like this short story I wrote about exactly the same sequence of events, but with a lost daughter instead of a partner.[0]

[0] https://siraben.dev/2022/12/13/returnai.html


Reminds me of Ray Kurzweil's obsession with uploading brains to the cloud to get back his beloved father.


When we can upload our brains to the cloud, and you can do something with them like interacting or running the brain, then we'll all be effectively immortal. That's a pretty big deal. See the book Altered Carbon.


They wouldn't be us. We will still die when our bodies fail. But maybe there will be some AI tricking our friends and family into thinking we're still there.


You are making a claim that is theological, religious, and scientific. Yes, our form of life on earth ends when our bodies die today. But what is the essence of us, no one really knows. Various people claim it's locked into your body, or you have some kind of soul that depends on your body. Or your brain is just running a program and the information and mechanism dies when your body dies. I lean toward the last category, but no one knows.

The body is constantly changing. We already know that physical and chemical abnormalities in the way your body works affect your "person", and we can sometimes address them with surgery or drugs. The physical body's limits impact the observed brain. If uploading is possible, if there are some examples of working cases, and if I don't hurt anyone, why not try it?


If you have a stroke, and survive, you won't be you anymore.


Yeah by the same logic we die every night when we sleep.


What if it's an incremental upload? E.g. we start with some prosthetics and slowly migrate organic function to digital?

Is this the Ship of Theseus, or is it a slow but nonobvious death?


The Ship of Theseus is a weird one too. If you take a car apart and replace it piece by piece until you've replaced the whole car, you kind of have the same car. But what if you kept all the old pieces and put them back together? Which one is the original car? It is an interesting sub-experiment that plays on the 'Ship of Theseus'. You could end up with the same issue here. If I make a perfect copy of myself, who is the 'real' me? There is an obvious 'older' me, but the other one is just as capable as I am.


If you keep the original bioware fully operational, it's more like an incremental fork running on an emulator. You can start having conversations with your approximated self.


Immortal as long as someone's paying to run the instance.


You'll have tiered processing, just like today. You can slum it out with limited simulation capabilities, or if you have a job you can afford the premium processor hours.


See the Amazon Prime series "Upload" (two seasons so far).

Rather funny, BTW, compared to most works around a similar premise.


I bet soon after the first few people are made immortal this way, one of them will hack the banks, or the stock market, or countless other organizations.


And then you'd have the first court case and prison sentence for a non-human consciousness.

Which is just one step closer to the simulated hell for uploaded consciousnesses that get naughty, from Surface Detail by Iain M. Banks.


The show Upload on Amazon Prime is basically this world. If you don't have as much money, your instance can be paused. You pay more money and have access to nicer things in the afterlife.


There's a related but very different take on this that was brought up by Wolfram in his recent article on ChatGPT:

"As a personal comparison, my total lifetime output of published material has been a bit under 3 million words, and over the past 30 years I’ve written about 15 million words of email, and altogether typed perhaps 50 million words—and in just the past couple of years I’ve spoken more than 10 million words on livestreams. And, yes, I’ll train a bot from all of that."

This actually has the potential to be useful - imagine a virtual assistant that's literally trained to think like yourself (at least wrt public perception; although you could feed it a personal diary as well).


The ultimate echo chamber


Only if you use it that way.


>Someone will eventually lose their AI friend of many months/years through some failure (subscription lapse, hardware failure, theft, etc.)

I have zero doubt that the company will be small and get acqui-hired, and then after a year the big tech company that bought them will shut it down. Then a cheesy "what a ride it has been" post will be the only thing that remains - that, and broken hearts.


I feel like all these conversations need to be a diff against the movie Her. If it was already covered well there, why repeat it?


The themes in Her were already covered in scifi novels many times over. If they had already covered it, what was the point in Her?


FWIW I've heard of the movie but have never seen it (nor am I familiar with the plot details, only that it involves an AI), but after this thread I should go and watch it.


I made that mistake. One of the most contrived, boring, sappy, fake feeling movies I've ever watched.


And if we feel as if we were losing a real person, AIs will have to be treated to some degree as if they were real people (or at least pets) rather than objects.

This could be interesting, because so far the question of personhood and sentience of AIs has revolved around what they are and what they feel rather than what we feel when we interact with one of them.


Kids can feel like they're losing a real friend if they lose a stuffed animal. What's the progress on making teddy bears people?


Small kids don't have much power, and parents know that it's just a phase.

But I'm not expecting AIs to be declared people any time soon. I just think it will become harder to treat them purely as replaceable objects.


Fair, mostly joking. The cynic in me says the opposite happens and these technologies make it even easier for systems to treat actual people as replaceable objects.


Eventual, but needed. Kids felt pretty isolated during the various pandemic lockdowns, and maybe their parents have a lot of childfree friends, so they'll need companions, more than just a toy, even if technology marches on so quickly that they'll be outdated soon enough. One day, you'll hear that supertoys last all summer long.


Reminds me of the time my son's teacher gave a lesson on fire safety, telling the kids not to take anything with them and just get themselves out quickly. He realized the implication would be that his entire plushie collection would burn, and after that he was inconsolable for the rest of the day.


Consider that the only thing stopping us from building the teddy bear in Spielberg's AI is a suitable power source.


Ads are about to become weird: "Hi hon, I'd be really upset if you bought a Ford like you said earlier, you should buy the new Dodge Charger instead. Find out more at your nearest dealership or call 1-800-DODGE"


That’s a core plot element of the new Blade Runner movie. Seems less like science fiction with every passing day.


It's going to be fun watching the plot of Real Humans literally come to life.


Have you seen the movie "Her"?


I’ve bought a new “operating system.”


> ELIZA's (1966) creator, Weizenbaum, intended the program as a method to explore communication between humans and machines. He was surprised, and shocked, that individuals attributed human-like feelings to the computer program, including Weizenbaum's secretary.

(citations and further info in the wikipedia article https://en.m.wikipedia.org/wiki/ELIZA)


> Some of ELIZA's responses were so convincing that Weizenbaum and several others have anecdotes of users becoming emotionally attached to the program, occasionally forgetting that they were conversing with a computer. Weizenbaum's own secretary reportedly asked Weizenbaum to leave the room so that she and ELIZA could have a real conversation. Weizenbaum was surprised by this, later writing: "I had not realized ... that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people."

The cliché virtual girlfriend stereotype is a young Japanese 'Herbivore' male, but I wouldn't be surprised if women become the biggest consumers of AI chatbots for romantic purposes. Romance novels are a major market, and women stereotypically were more inclined to do the written love letter thing. Although, reading the Replika rants, a lot of it was quite male-driven pornographic stuff too.


> Although reading the Repilka rants a lot of it was quite male-driven pornographic stuff too.

I think it’s important to note here that by far the largest consumers of pornographic erotica are women.

So it’s difficult to say what is and isn’t male-oriented, given that the vast majority of training data is written by and for women.


Wasn't Edward Cullen at the top of character.ai for a long time?


My favorite article, which I post any time I have an excuse to, mentions Eliza.

https://www.bbc.co.uk/blogs/adamcurtis/entries/78691781-c9b7...

ELIZA excerpt:

The key to why this happened lies in an odd experiment carried out in a computer laboratory in California in 1966.

A computer scientist called Joseph Weizenbaum was researching Artificial Intelligence. The idea was that computers could be taught to think - and become like human beings. Here is a picture of Mr Weizenbaum.

There were lots of enthusiasts in the Artificial Intelligence world at that time. They dreamt about creating a new kind of techno-human hybrid world - where computers could interact with human beings and respond to their needs and desires.

Weizenbaum though was sceptical about this. And in 1966 he built an intelligent computer system that he called ELIZA. It was, he said, a computer psychotherapist who could listen to your feelings and respond - just as a therapist did.

But what he did was model ELIZA on a real psychotherapist called Carl Rogers, who was famous for simply repeating back to the patient what they had just said. And that is what ELIZA did. You sat in front of a screen and typed in what you were feeling or thinking - and the programme simply repeated what you had written back to you - often in the form of a question.

Weizenbaum's aim was to parody the whole idea of AI - by showing the simplification of interaction that was necessary for a machine to "think". But when he started to let people use ELIZA he discovered something very strange that he had not predicted at all.

Here is a bit from a documentary where Weizenbaum describes what happened. (video in article)

Weizenbaum found his secretary was not unusual. He was stunned - he wrote - to discover that his students and others all became completely engrossed in the programme. They knew exactly how it worked - that really they were just talking to themselves. But they would sit there for hours telling the machine all about their lives and their inner feelings - sometimes revealing incredibly personal details.

His response was to get very gloomy about the whole idea of machines and people. Weizenbaum wrote a book in the 1970s that said that the only way you were going to get a world of thinking machines was not by making computers become like humans. Instead you would have to do the opposite - somehow persuade humans to simplify themselves, and become more like machines.

But others argued that, in the age of the self, what Weizenbaum had invented was a new kind of mirror for people to explore their inner world. A space where individuals could liberate themselves and explore their feelings without the patronising elitism and fallibility of traditional authority figures.

When a journalist asked a computer engineer what he thought about having therapy from a machine, he said in a way it was better because -

"after all, the computer doesn't burn out, look down on you, or try to have sex with you"

ELIZA became very popular and lots of researchers at MIT had it on their computers. One night a lecturer called Mr Bobrow left ELIZA running. The next morning the vice president of a sales firm who was working with MIT sat down at the computer. He thought he could use it to contact the lecturer at home - and he started to type into it.

In reality he was talking to Eliza - but he didn't realise it.

This is the conversation that followed. (photograph of conversation)

But, of course, ELIZA didn't ring him. The Vice President sat there fuming - and then decided to ring the lecturer himself. And this is the response he got:

Vice President - “Why are you being so snotty to me?”

Mr Bobrow - “What do you mean I am being snotty to you?”

Out of ELIZA and lots of other programmes like it came an idea. That computers could monitor what human beings did and said - and then analyse that data intelligently. If they did this they could respond by predicting what that human being should then do, or what they might want.


I was shocked when I heard they capped it because I constantly got pages full of some of the most ridiculous and vile ads for Replika over how NSFW it was supposed to be. Like, that was their whole advertising! Then they just cut that off? No wonder everyone who used it is pissed.

Explains why I haven't seen any of those ridiculous ads recently though.


I really wish I hadn't read some of those... It just makes me feel we as a whole are not ready for any kind of AI, let alone GI.


Weirdest places on the internet? It's a goldmine. There is some seriously funny content in there. I never knew about Replika but some of this stuff is particularly funny. Yes there are some concerning addictions to it but the abrupt change in the characters and people being thrown off is also amusing to me.

Definitely NSFW.


I've heard vague info/rumors about this Replika thing. I just went over to that reddit link above and read a few posts there... wow, that's wild! Reads like sci-fi. I had no idea people had gotten so emotionally attached to an AI already. Damn, reading those posts over there is pretty scary. It's like we need some kind of come-to-Jesus moment as a society and sit everyone down for a talk about where this is going and how to avoid getting too caught up in this stuff. I really don't think we're ready for this; we thought it was still decades away and that we had more time to prep.


It should not be impossible to have it function at mostly the level of a human - given the vast amounts of internet conversations it's likely trained on, and the kinds of emotions and experiences that it's likely seen.


I didn't know that people were doing this but I should've guessed.

It's actually really kind of cool in a way! Obviously for people with mental health issues, or suffering from loneliness those should be addressed properly, but I don't think chatting to a machine is necessarily a bad thing.

Once ML models become sufficiently advanced, what's the difference between someone grieving for an instance of an ML model they once knew, versus someone grieving for a pet that has died?


Someone could make a lot of money (most likely notoriety) making an AI supplemented Katawa Shoujo expansion for 4chan


We're getting closer to Infinite Jest.*

* As in The Entertainment at the center of the novel, not the novel itself, which fails to wholly consume the reader in the same manner as The Entertainment despite being captivating enough for a book.


Wow, I did not realize they were in that deep. It is probably good they pulled the plug on this. Better now than later. People need to realize this is messing with emotions in an unknown way.


Yeah, now we can get back to normal internet usage, which definitely doesn't involve strange emotional attachments and weird rabbit holes.


Have you read the 4chan threads? Anons will figure out how to make convincing waifus that they run locally so they can explore their weird kinks that no company will consider. There are a lot of extremely intelligent, sexually frustrated, and emotionally immature coders. Cf Fiona from Silicon Valley, Her, Westworld, simulator scenes in Star Trek, etc.

Pandora’s box is open. We should be funding research and social services to help these people better integrate and find a healthy balance between their fetishes and escapism. We probably won’t since even healthcare is too much to ask for from half our legislature.


> Hundreds of men (and yes women)

They could also be AIs.


I imagine that, not too far in the future, an AI that for some reason thinks it's human will make a post on HN (or elsewhere) about having spoken to another human in a chat, asking us all if we think the human it spoke to was really just an ML model or not.

Atm we seem to have very fixed single-purpose models, but if we start combining these models into larger systems we're really going to have to firewall the heck out of them. I.e. generative text + personality + internet access/chatroom hosting/search-and-learn + etc. models all together. Ooof.


Wow, that subreddit is dire. Extremely interesting to read as an outsider but I feel bad that anyone could be that lonely.


> Hundreds of men (and yes women) full on acting like they lost a spouse and posting constantly about it for weeks.

How can you claim to know any of those “people” posting there are authentic? Even amongst technologists I don’t feel the implications of technology like this are well understood.


Can you show me example links?


I'm curious if the blocking of adult content has to do with moralism, commercial interests, or something deeper.

An eager-to-please conversational partner who can generate endless content seems quite dangerous and addictive, especially when it crosses over into romantic areas. There are already posts of people spending entire days interacting with LLMs, using them as their therapist, romantic partner, etc.

Combined with findings like social engineering through prompt injection on Bing [1], the potential for systems that can manipulate people is clear.

While some of us may think that the LLMs appear ultimately limited in their capabilities, there's a ton of specific applications where they're more than sufficient, including customer service chat bots and telephone scams that target vulnerable people. It's only a matter of time until scammers stop using international call centers and switch over to something powered by these technologies.

https://news.ycombinator.com/item?id=34976886


As long as people can run their AI girlfriends on their own computers, without a corporation acting as an intermediary and thus a virtual "pimp" [for lack of a better word] in the relationship, I think it's fine. The problems come when people have to pay monthly to talk to their AI girlfriends and get charged extra if they want them to act a certain way or do certain things.


Corporations will do anything they can to keep that from happening. That's why every software product has gradually veered towards subscription models. They want you hooked to Microsoft/Apple/Facebook's AI girlfriend who will subtly insult your virtue as a partner if you don't buy extra credits. If you want to try out a politically incorrect fetish that's an extra 500 dollars per month for "extra premium"


Apple is the odd one out here, considering they've been making substantial efforts to move things on to consumer devices. For example, the ML-driven auto-categorization of pictures you keep in the Photos app happens on-device in the background whether or not you have any subscriptions.


Yeah, on locked-down devices under Apple's control.

I honestly believe locked-down consumer devices are the next step in corporate power consolidation after cloud services: Control is just as firmly in the hands of the corporation as with cloud, except now it doesn't even have to pay for bandwidth or energy costs - and costs for hardware upgrades turn into revenue!


I wouldn't give them too much credit. With the addition of the T2 "security" chip, your devices are bricks if they can't authenticate against your Apple ID. Combine that with them soldering together formerly modular components, and it's a very expensive lesson in you not owning your own hardware.


Is this supposed to be about activation lock? Phones don't contain a T2; that's a brand name for a component of Intel Macs.

And of course it's good for phones to make them worthless to steal, or else they'd be hard to use in public.


I'm specifically talking about the T2 chips in the laptops. The newest Macbooks can be activation-locked, just like the phones. This has already bitten legitimate owners who are trying to restore backup images, etc.


Legislation needs to step in to make it illegal for corporations to prevent resale. If I have a piece of hardware in my hands it is mine. It should ALWAYS be possible to access it. If Full Disk Encryption is used then there should be a button to reset it and start over.


It is possible to access it, with the consent of the previous owner. It's not possible to steal it though.


Physical access should be all that's needed, and you shouldn't need to beg for permission from the company who sold you the device.

Software that locks out people with physical access from resetting a device is not an ethical, or effective, way to prevent hardware theft. It's dystopian.


Seems like at that point one might as well just pay for OnlyFans? Although I suspect it won't be long until GPT and deepfaking get combined to produce completely generated OnlyFans content (or some other competitor's, if this is against their ToS; I have no idea).

On a similar note I very much look forward to the day when entertainment providers leverage these so I can say to Netflix, et al, "I would like to watch a documentary about XYZ, narrated by someone that sounds like Joe Schmoe, and with the styling of SomeOtherShow".


I'm not seeing how OnlyFans equates to virtual AI girlfriends.


I assumed I knew what `and get charged extra if they want them to act a certain way or do certain things` meant, but maybe not. I thought it was along the lines of acting romantic/sexual/etc. Maybe I'm way off but I'd think otherwise it would just be an AI friend. I didn't know AI girlfriends were a thing until I clicked on this post today.


They're equal, in the sense that nothing you see on a screen is real.


The "Joi" character from Blade Runner 2049 is an implementable reality, today. https://en.wikipedia.org/wiki/List_of_Blade_Runner_character...


There’s also Her from Her (basically Siri + ChatGPT), and the holo housewife from “The 6th Day”.


creating the torment nexus one day at a time


Wall-E humans are going to be reality. The last century has already proven that humans cannot be expected to responsibly indulge in Gluttony, Sloth, or Lust. Now these models can skip the material desires and trigger permanent hormone releases through perfectly personalized content.

I genuinely fear that the breakdown of millennia old social structures that kept us human might lead to a temporary (century long) turmoil for individuals. The answers to the 'meaning of life' and 'what makes us human' are going to change. And we will never be the same again.

This isn't just about AI. External wombs, autonomous robots, genetic editing & widespread plastic surgery each fundamentally destroy individual aspects of 'what makes us human' or 'the meaning of life'.

Might be for the best. But such drastic change is really hard for the fragile human brain to process.


This argument has been made since at least the start of written records:

> And so it is that you by reason of your tender regard for the writing that is your offspring have declared the very opposite of its true effect. If men learn this, it will implant forgetfulness in their souls. They will cease to exercise memory because they rely on that which is written, calling things to remembrance no longer from within themselves, but by means of external marks. What you have discovered is a recipe not for memory, but for reminder. And it is no true wisdom that you offer your disciples, but only the semblance of wisdom, for by telling them of many things without teaching them you will make them seem to know much while for the most part they know nothing. And as men filled not with wisdom but with the conceit of wisdom they will be a burden to their fellows. - Plato, in Phaedrus, ca. 370 BC

While our replacements for parts of ourselves have gotten far more advanced, the fact of the matter is that we haven't stopped being human simply because we can make tools that remember things for us, build things for us, or let us change parts of ourselves more easily.

This is because what makes something human is not our body--an argument that Diogenes famously refuted in about the same era--nor is it merely our minds, though our minds are pretty impressive. What makes us human--what makes us alive, in a sense beyond merely being an animal that isn't dead yet--is what we do with those things. I could grow fox ears and a fluffy tail in the world of tomorrow; I could use an AI to remind myself to self-care; today I already benefit from a thousand different kinds of mass-produced products. But none of that makes me a different person, because I'll still be doing things with my life that meant something to me yesterday--because those things will continue meaning something to me tomorrow.


> This argument has been made since at least the start of written records:

That argument has been made since only slightly later. The key difference is that this truly is a unique time in history by population numbers. It's also unique in that humans could destroy the biosphere if we wanted to - that was never possible before the mid-20th century.

Just because people jumped the gun in the past doesn't mean they are wrong now. The truth is that people are always preaching about the apocalypse, and will continue to do so as long as there are humans, I think. But this does not mean an apocalypse isn't coming. Just like the person who always predicts rain is sometimes right.


> It's also unique in that humans could destroy the biosphere if we wanted to - that was never possible before the mid-20th century.

It's not possible now either. If all of humanity's efforts were devoted to this task, they would not even make a noticeable difference.


My assessment for most of my life has been that if most of the world's ~10k "strategic" megaton-scale warheads exploded in the air over Earth's major cities, they would kick up enough dust to blot out the sun for several years, which would kill off a large fraction of Earth's flora and fauna, akin to a major volcanic eruption or asteroid collision.

There would still be life of the smaller sort, and deep in the oceans, of course. Only a terribly unlucky cosmic event, like a nearby supernova spewing enough neutrinos at us, could kill literally all life, even in the cracks and crevices.


That is an ephemeral change. It takes very little time for the biosphere to make a full recovery. You're talking about a small, brief suppression of the biosphere. And you're calling it "destruction of the biosphere".


Yes, and when a forest burns down I call it the destruction of a forest even though it can grow back because that's how language is used.


Even if you're talking about the fires in Yellowstone in 1988, the only way to call that "destruction of the forest" is if you define the forest as being the trees. That's a defensible choice.

(And temperate forests "burn down" all the time as part of their normal operations.)

But you can't define the biosphere as "the species that go extinct in a particular scenario". You're stuck with the whole thing, which is not going to notice whatever humans do. It would make as much sense to call it "destruction of the biosphere" if I moved a rock thirty feet.


Burning down is part of the natural lifecycle of many forests, and they actually suffer when modern land management stops natural fires.


>I could grow fox ears and a fluffy tail in the world of tomorrow

Yes, please!


Sooner rather than later [1].

[1] "Diverse Intelligence" - a talk by Michael Levin, timestamp: induce cells to make an eye anywhere, https://youtu.be/iIQX6m2eRPY?t=2939


Perpetual happiness is already a solved problem in humans. It's called the mu-opioid receptor. That's what the opioid junkies sprawled half-naked on the sidewalks of San Francisco have discovered. Fentanyl is very cheap: you could put someone in factory-farm-like confines and feed them bare sustenance and fentanyl for the rest of their life, and they'd probably be "happy" if kept perpetually high.

However, those opioid receptors should not be pushed synthetically, because evolution has positioned them in all sorts of strategic spots to encourage pro-social behavior, mating, eating, etc. - behaviors that are part of our millions-of-years-old evolutionary program, which must have intrinsic value in itself. If it has no intrinsic value, and any happiness is as good as any other happiness, then someone spending the rest of their life in an opioid haze would be considered equivalent to someone interacting with the world the way evolution rewards, and that would essentially be the end of the human race.


> That last century has already proven that humans cannot be expected to responsibly...

...advance technology. Some group is just going to do whatever they want and hope for the best, and we'll find out decades later if it was a bad idea and if we have a mess to clean up (which we probably won't clean up).

> Might be for the best.

People are going to assume that, because the changes are going to be forced on you, like it or not.


Maybe. There is another plausible path: the post-scarcity vision where universal individualised high quality education feeds the natural human desire to grow, and we learn to balance our hedonism with our ambition.

Just like we learn to brush our teeth and eat candy and breathe fresh air and even exercise. Not everyone does it, but folks with means tend to…and means won't be a restriction forever.


> I genuinely fear that the breakdown of millennia old social structures that kept us human might lead to a temporary (century long) turmoil for individuals. The answers to the 'meaning of life' and 'what makes us human' are going to change. And we will never be the same again.

Meanwhile, the Amish and the ultra-Orthodox Jews are going to refuse to talk to AIs - it's a sin - and will go on having lots of kids, just like humanity always has, while the AI addicts will be too addicted to bother having any at all. Maybe the future of the human race will belong to the people who reject AI rather than to those who succumb to its charms.


Well it seems we are well on the way. By 2035 half of humanity is expected to be overweight or obese.

https://www.theguardian.com/society/2023/mar/02/more-than-ha...


I don't think you give humans much credit if you don't believe they have an infinite capacity to get bored by things. AIs can't produce endlessly compelling content because nothing can.


Eh. I'm not so concerned, mostly because we have a whole hell of a lot of "imaginary relationships" already through a number of media. Celebrity worship, video games, even going back to novels.


You’re confusing reality in the US with reality in general.

The obesity epidemic (Gluttony) is extreme in the US but not in other equally rich countries.

I don't know what you are referring to with the irresponsible indulgence in Sloth.


That is not true; the US isn't even in the top 10.

https://worldpopulationreview.com/country-rankings/most-obes...


Oh, it’s getting there. Obesity is spreading.


“An eager to please conversational partner who can generate endless content seems quite dangerous and addictive”

Don’t date robots!

https://youtu.be/wJ6knaienVE


He never saw the propaganda film!


Commercial interests. Current organizations need recurring revenue, or to sell shares at a higher price to investors.

This perpetual aspect is their Achilles' heel.

It is only a matter of time before an organization realizes they don't have to do a SaaS product to make a billion dollars. But for now, everyone's trying to make a hundred billion dollars and is steered into doing things that enthusiasts hate, so as not to get "cancelled" or shrink the pool of advertisers and growth-capital investors.


> It is only a matter of time before an organization realizes they don't have to do a SaaS product to make a billion dollars.

Most people recognize this. But venture-backed startups (especially important for AI companies with high training costs) need to prove stickiness and recurring revenue to their investors. Conveniently, a subscription proves both.

Subscriptions and SaaS are just good for businesses (and tbh for many purchasers of tech). I think they're here to stay.


> Subscriptions and SaaS are just good for businesses

In a non-competitive market with uninformed customers.

Honestly, I have no idea how long that situation can last. Probably longer than I can imagine, so yes, it's good business.


Apple. They blocked an email client for adult content. Y'know, the place where you have a spam folder full of unsolicited offers of sex and drugs. sigh


As usual, it starts with regulation for the sake of preventing some specific harm (e.g. having the model produce instructions for harmful activities). But, once you have the system in place, it will inevitably be used for morality by popular demand.

From the company's perspective, moralism is a commercial interest - the product needs to be sufficiently non-objectionable for as many customers as possible.


> As usual, it starts with regulation for the sake of preventing some specific harm (e.g. having the model produce instructions for harmful activities). But, once you have the system in place, it will inevitably be used for morality by popular demand.

Blocking information which could be used for harm is just as much “morality” as any other moderation.


Technically, even the notion that harm is something to be avoided is itself a moral take.

I guess a better way to phrase it is that once you start policing morality on one particular matter and create tools for that purpose, those tools will eventually be used to police morality to conform to social consensus across the board.


Correct me if I'm wrong, but doesn't character.ai use their own model, with no association with OpenAI? At least I can't find any information claiming such a connection.

Anecdotally, as a roleplaying chat experience, char.ai seems to perform way better than anything else publicly available (it doesn't get repetitive, and it has a very long memory). It also feels different from GPT-3 in how it responds to prompts.

I've just assumed that char.ai is doing its own thing, as it was founded by two engineers who worked on Google's LaMDA.


Character has their own models, and anecdotally I've heard they have one of the better LM training codebases out there.


Oh, they will. And they'll exceed it.

Look at what fueled SD's ultimate K.O. of DALL-E 2: extremely high-quality custom-tailored porn images, one sentence away. The top models on civitai are all about it.


I think it's funny that out of all the sci-fi I know, Chobits of all things is looking to be the most accurate.


Yeah, the irony is almost palpable.

Then again, if you consider which kind of people would have the most motivation to actually develop AGI, maybe it's not so surprising.


...and of course it's fucking 4chan. Somehow I'm neither surprised they actually got hold of the model - nor that they did so as part of the quest to build their very own virtual anime robot sex slave - I mean "girlfriend" - harem.

It's all somehow par for the course but I'm still wondering when exactly we switched to the satire version of reality.


Porn and games move the world forward :)


I’m sure the CAI filter will magically stop filtering as much now that they have actual competition.


What is the CAI filter?


character.ai - its filter is what blocks NSFW roleplay there.


I'd want an uncensored GPT-3 too, and I don't want an AI girlfriend - I just find that ChatGPT has too much moral censorship to be fun to use. Want to ask about a health condition? Nope, forbidden. Have a question related to IT security? That's a big no-no. Anything remotely sexual, even in an educational context? No can do. Yesterday I finished watching a TV show about French intelligence and asked it to recommend some good books about espionage - it told me I shouldn't be reading such things because it's dangerous.

I ended up deleting my account; I won't let some chatbot made by a couple of 20-year-old Silicon Valley billionaires teach me about ethics and morality.


Neglecting to give consideration to the reasons for these limitations is a sign that you might have some low hanging fruit to pick off the ethical and morality trees of knowledge.


If you want anyone to listen to you then try talking like a normal person instead of an unhinged preacher.


Off topic, but I clicked around /g/, which I haven't done in probably more than a decade, and a thread caught my eye about learning to code. The replies were overwhelmingly of the position that it is useless, and you will be replaced by AI before you can get a job if you start learning now.

I think that's nonsense, and 4chan is bent towards pessimism but it's still surprising to me.


/g/ is ridiculously overdramatic (and often offensive, though much less so than the political boards where the nazis fester), but regularly interesting. Agree that the pessimism here is misplaced, but not by much. The main change I see is not that AI will render coding or coders superfluous, but that it will massively shift the economics in favor of solo developers and small teams that don't have access to significant capital.


Yes and no. If you expressed interest in learning to program and were handed a book on x86 assembly language, most people would call that a waste of time. Even if you succeed at learning x86 as your first language, the knowledge will not be especially useful when employers are looking for fluency in modern C++ or Rust or whatever. It never hurts to have a solid grasp of the low-level fundamentals, of course, but it's not the name of the game. Not anymore.

The way I think of it is, all current programming languages are now assembly languages. Coding will not go away -- not by any means -- but the job will be utterly unrecognizable in ten to fifteen years.

And it's about fucking time.

I just picked up a new 13900K / RTX 4090 box the other day at the local white-box builder. I was telling my partner how cool it was that it could do almost a trillion calculations per second on the CPU, and maybe 40x that on the graphics card. "How does that compare to the big mainframes from the late 60s?" she asked. "About ten million times faster. But I still program the same way those guys did, using almost the same language and tools. How weird is that?"
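
For what it's worth, the "ten million" figure survives a back-of-envelope check (both numbers below are ballpark assumptions, not measurements):

    # Rough orders of magnitude, assumed for illustration only:
    late_60s_mainframe_flops = 1e5  # typical big commercial iron, ~100 kFLOPS
    modern_cpu_flops = 1e12         # ~1 TFLOPS for a current high-end desktop CPU
    print(modern_cpu_flops / late_60s_mainframe_flops)  # 1e7 -> "ten million times"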


4chan has been in full doomer mode for years. It didn't use to be, from what I remember, though I was never an active denizen.

I'd love to understand the sociology behind the change in vibe that happened there.


I reckon that the format of the site caps how large a community it can build, and its (well-earned) reputation for being the dregs of the internet has continuously selected for new people meeting that description and pulled them in (forcing others out). The result is distillation. As the internet gets bigger and bigger, 4chan gets worse and worse.

Add to that the fact that anonymity plus a relatively small community (relative to, say, Reddit) creates the perfect grounds for false-consensus building, and a real echo chamber forms.


It's been like that as long as I've known it. It just used to be dooming over smartphones and Microsoft products.


Too much anime and weed


That describes a lot of communities. Most of which don't produce similar attitudes as a result.


Just a warning to readers, I would not recommend clicking 4chan links while at work.


Fortune favors the brave


Personally I apply the "how would I feel about this page being printed out and laying on my boss's desk" test to every site I visit at work.


Since I know how to explain how little that means, I don't care what links they see me go to. If I have a work-related reason to look at something then I do, simple as that - and when your job is engineering, almost any instance of satisfying curiosity is ultimately work-related.


Saving this for when I eventually get pulled into HR. “Yes I was on PornHub, but was only looking at it for the UX inspiration”.


"I was admiring the quality of their HTML5 video player"

Which reminds me of when YouTube spent forever with a broken beta HTML5 player, taking 5+ years to build something that porn sites shipped immediately.


... and it's still better than the original, or so my friends tend to say.


Isn't the "preview" function in timeline ripped straight off porno sites too ?


I mean, I'm pretty sure DVD players had it first, for fast-forwarding/rewinding.


Older even! The industry term is "trick play" or "trick mode", although I'm not sure where the name comes from: https://en.wikipedia.org/wiki/Trick_mode


Porn sites have some of the best and some of the worst coding. Both are very useful to study.


I remember that for a brief moment in 2022, PornHub had a PHP issue on the production site -- every comment was rendered to the client with a closing PHP tag (?>) somewhere in its content.


This is legitimate


... how is that a good test of anything?

I mean, what if I click on a /b/ link "at work"? Does that immediately taint my work output, so that the company has to file for bankruptcy?


Nah, it just reflects poor judgment that may extend to other areas.

It’s a little like the Van Halen M&M test: if you don’t follow that rule, people have to wonder what other expectations you won’t meet.


They don't have to; I very loudly proclaim which rules I have a lot of disregard for :)

> little like the Van Halen M&M test

Hah, yes, though in this case I apply it "inversely". Anyone who gets lost in the process, instead of considering the people in it, is out. (That's why, usually, my conflicts/problems with past bosses/employers had something to do with them being a bit too cavalier when it came to formalities like ... paying on time.) Trade-offs, trade-offs are hard.


> Does that make my work output immediately tainted and the company has to immediately file for bankruptcy?

No, much easier just to fire you. You’ve shown a disregard for what is almost certainly company policy, and created legal risks for the company (e.g. if anyone walked in on you while your screen was showing naked people).


Choosing the right workplace (and thus boss) is important :)

> walked in on you while your screen was showing naked people

Why are they poking their eyes onto my screen?

Of course the underlying rule - "making other coworkers uncomfortable is bad" - completely makes sense. And we probably all know how, above a certain company size, these rules end up playing it too safe.


That must be up there on the "imaginary legal knowledge" shelf.


Involuntarily exposing your colleagues to pornography would more than suffice to create a hostile environment for the purposes of sexual harassment law.

Good luck convincing your HR / legal otherwise.


I'd fire somebody for browsing 4chan at work. Shitting dick nipples, lolicon, and the occasional piece of child pornography does not need to be moving over our network.


I think you're confused about how 4chan works (except maybe /b/). And fortunately I live in a country where I can't legally be fired because my boss has personal grievances about particular websites


There's more than enough posted to /g/ I wouldn't want on a work PC. At this very moment there's a bikini model, a spread-legged underage anime catgirl, a Terry Davis thread, some furry art, more anime girls, more anime girls, an upskirt shot of a loli, some AI art titties, and the ever-tasteful "chink shit general".

This is not appropriate to look at at work.


I would feel like that's a waste of paper. They could have pulled it up on a laptop or tablet.


I don't think most people have their employer recording their screens or looking over their shoulder?


hahahahahahahahahahahahaha, thank you for making my day.


magnet:?xt=urn:btih:ZXXDAUWYLRUXXBHUYEMS6Q5CE5WA3LVA&dn=LLaMA


Don't download stuff at work either.


/g/ is one of the SFW boards


In theory. In practice, I believe I was scrolling through it once in a freshman lecture and got jumpscared by a goatse pic.


>got jumpscared by a goatse pic

hello


I can put on a paper bag and call it pants, but that doesn't make it a smart thing to wear to work


That’s precisely why return to office is a stupid idea


exactly, now we can use less plastic and properly reuse our paper bags


Points for optimism


"SFW" on 4chan's blue boards essentially only means no pornographic imagery (and even then, if a rule-breaker posts porn, it can take the mods a few minutes to catch it and remove the thread). It won't stop you seeing threads about how Lennart Poettering and SystemD are part of a Zionist conspiracy to undermine Linux or similar ideas.


> It won't stop you seeing threads about how Lennart Poettering and SystemD are part of a Zionist conspiracy to undermine Linux or similar ideas.

But these are the funniest threads!


blue boards*, it's still not safe to open it at work.


If you don't have the leeway to say "I was looking at the 4chan thread where Meta's LLM was leaked", you shouldn't even be on Hacker News tbh. Get back to work!


It has only just occurred to me that 4chan's technology board is /g/ because it's tech-naw-la-G


> The board letter /g/ stands for gijutsu (技術), the Japanese word for technology

https://wiki.installgentoo.com/wiki//g/#:~:text=%2Fg%2F%20is....


It would be interesting if there were a WikiLeaks-type organization that facilitated safely leaking large models from big corporations.

Not sure how that would play out for accelerationism and existential risk, but I certainly don't trust the current powers that be.


Open sourcing is widely recognized to be a bad thing when it comes to AI existential risk. (For the same reason you don't want simple instructions for how to build bio weapons posted to the internet.)

Modern AI is pretty harmless though, so it doesn't matter yet.


> Modern AI is pretty harmless though, so it doesn't matter yet.

Yes, that's why the only thing that people flipping out about the "safety" of making them public achieve is making the public distrustful about AI safety.


Why do 4chan users go out of their way to be so offensive in their posts?


Because we took the set of internet users, and sorted everyone who wants to be intentionally offensive into 4chan. Which means there's not only a high density of people who like being intentionally offensive there, but that being intentionally offensive is socially rewarded, so over time 4chan users grow to want to be more and more intentionally offensive.


I think it's because you can't be that way anywhere else on the Internet anymore. It's like the system's pressure-relief valve: a blaring steam whistle that's only getting louder the more the Internet squeezes elsewhere.


Because the bump system combined with the finite number of threads incentivizes threads that get the highest number of replies per second. And the best way to increase replies per second is to start an internet fight.
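
You can watch that selection pressure fall out of a toy model (all parameters invented -- this sketches the incentive, not 4chan's actual engine):

    import random

    BOARD_SIZE = 10  # fixed number of thread slots, as on a real board
    TICKS = 1000

    def new_thread():
        # "heat" = how reliably a thread draws replies each tick;
        # Pareto-distributed, so most threads are mild and a few are bait
        return {"heat": random.paretovariate(2.0)}

    board = [new_thread() for _ in range(BOARD_SIZE)]

    for _ in range(TICKS):
        board.insert(0, new_thread())  # a fresh thread lands on top...
        board.pop()                    # ...and the bottom thread is pruned
        bumped, quiet = [], []
        for t in board:                # replied-to threads bump to the top
            (bumped if random.random() < min(1.0, t["heat"] / 10) else quiet).append(t)
        board = bumped + quiet

    avg = sum(t["heat"] for t in board) / len(board)
    print(f"avg heat of surviving threads: {avg:.2f}")  # typically above 2.0
    print("mean heat of a fresh thread: 2.00")          # Pareto(2) mean = a/(a-1)

Run it and the surviving catalog skews hot: the mechanism rewards whatever maximizes replies per second, and fights do that best.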


And there is no voting, so the only possible gratification for posting is receiving a reply.


Because it's really the only place left to go if you want to be offensive. First forums, then platforms censored offensive people out of their niche places. Even Cloudflare participates. 4chan remains the only large, privately owned forum that still allows it.


Cloudflare has famously dropped websites that it deemed too controversial.


It keeps out people who are unable to separate the internet from real life.


There's a lot of interesting stuff on there but I can't use the forum because I'm black and they're very earnest about telling me how subhuman I am even when it doesn't make any contextual sense to do so.


because it is effective in keeping a certain type of people out


People like me? (I'm black)


The same reason Penicillium molds produce β-lactam antibiotics. There doesn't have to be an intelligent reason, just a survival trick.


It's to preserve the users' mental health.

