I’m no lawyer. But, this sure smells like some form of fraud. Or, at least breach of contract.

Employees and employer enter into an agreement: Work here for X term and you get Y options with Z terms attached. OK.

But, then later the employer pulls a Darth Vader: “Now that the deal is completing, I am changing the deal. Consent, and it’s bad for you this way. Don’t consent, and it’s bad that way. Either way, you held up your end of our agreement and I’m not.”


I have no inside info on this, but I doubt this is what is happening. They could just say no and not sign a new contract.

I assume this was something agreed to before they started working.


If you don't think new art styles have come out of AI art, you haven't looked for them. Or, you are setting the bar for "new art style" so high that even humans could never clear it again.

Maybe? My definition of an art style would be something distinct and recognizably different like say, Van Gogh, Picasso, synthwave, or the aesthetic of one of Wes Anderson’s movies. I’ve seen AI blend, remix and do slight variations on these styles, but I haven’t seen AI make something that I’d consider a new style in that same sense. Can you share an example?

Well, if the standard is "A group worked on a style for decades, it changed the field widely, and decades later it is recognized and studied worldwide" then nothing made in the past four years can possibly qualify :P Because random people with and without AI make all kinds of new styles every day. It's only in grand retrospectives that they are declared "A New Movement In The World of ART!"

I don't think it's hard at all for AI to create new styles. Will one of those become a new art movement? We'll have to wait a while for the retrospective.

Meanwhile, I'd recommend reading https://twitter.com/halecar2/status/1731612961465082167

And, checking out https://twitter.com/misterduchamp/status/1785009271010148734


Look at the images in your last link, then look at the work of the following artists, and you will see it's just a rehash...

Top Left: Gerhard Richter, Mark Bradford.

Top Center: Beeple (Mike Winkelmann), or V0idhead

Top Right: Wassily Kandinsky, László Moholy-Nagy

Middle Left: Benjamin Von Wong

Middle Center: Maurizio Cattelan

Middle Right: Takashi Murakami, Hayao Miyazaki

and so on....


OK then. Your turn. Show me some humans who, in the past four years, have made new styles that you honestly cannot find references for in the same way.

Show me an LLM that can compose, paint, or write after just going to school, without a large dataset.

I am sure any human could, even if they had never seen a painting before, read literature, or heard a famous musician.

And 4 years is nothing in art.

If you look at the Renaissance, Baroque, Rococo, and Neoclassicism, you get around 100 years between each, and 50 years between Impressionism, modern art, and so on. For jazz, blues, and hip hop it's around 20 to 40 years.

Maybe when LLMs have their own cultural background they'll be ready to attack creativity :-)


Attacking creativity is not the topic here. The topic is whether new art styles can come out of AI.

I think they can, because I think that "Style A + Style B + some unique qualities" is plenty to qualify as a new style. Like "Miyazaki made a new style from Toei Doga + Walt Disney + some unique qualities".

But, as expected, detractors require the goalposts to be defined as impossible to achieve. Generative AI has only been around for 4 years or so. Therefore, the current goalpost is placed at "It can't do anything new because it has not yet spawned a multi-decade movement" which is just silly.

Or, "It can't be creative because it can't yet physically go to school and instead it learns from a dataset" as if school was not a means to stream a dataset through a student :p Or, as if humans who never saw a painting before, but still swam constantly in an enormous dataset of nature and people, didn't make stick figure cave paintings.

At least this discussion is more interesting than the usual "I can tell just by being told it was made with AI that this piece has no soul", where 'soul' is "a thing that AI is defined not to have and that is unmeasurable in any way", and therefore does not affect anything in any way and cannot be shown to exist or not, because it is literally the fantasy of a ghost! :D


Show an example...

> The demos just become more cringey, the messaging more duplicitous and fake-authentic.

This part sounds like you are going through a rough patch. Like you are feeling down, not sure why, and are digging around for reasons.

What you are saying is legit. What you are feeling is legit. But, it would be worth doing some deep chill soul-searching to figure out if these really are the reasons. Maybe you need a hike in the woods. Maybe you want a job with a better mission behind it. Maybe I'm full of shit.

Either way, best to laugh at cringey demos. Learn how to use AI. What it's good for and what it's not. Use it or don't. Just build stuff people need either way.


No, seeing through the AI hype does not suggest that one needs to examine oneself. Sometimes bullshit is just bullshit.

If it's both "Fake, cringe hype" and also "Going to beat anything I do", then it's Schrödinger's Villain.

Nobody's being fascist here. But, the old fashy trope of "The Enemy is both Too Weak and Too Powerful!" is echoing in here regardless.

If OP was annoyed at either extreme, I'd have nothing to say. But, being down about self-contradictory problems indicates that you're not clear what's got you down.


Ohhhhhhhh, boy... Listening to all that emotional vocal inflection and feedback... There are going to be at least 10 million lonely guys with new AI girlfriends. "She's not real. But, she's interested in everything I say and excited about everything I care about" is enough of a sales pitch for a lot of people.

The movie “Her” immediately kept flashing in my mind. The way the voice laughs at your jokes and such… oh boy

If ChatGPT comes out with Scarlett Johansson's voice, I am getting that virtual girlfriend.

It already does in the demo videos -- in fact, it has been present in the TTS for the mobile app for some months.

Completely unironically this, if it actually has long-term memory.

I thought of that movie almost immediately as well. Seems like we're right about there, but obviously a little further away from the deeper conversations. Or maybe you could have those sorts of conversations too.

This is a kind of horrifying/interesting/weird thought though. I work at a place that does a video streaming interface between customers and agents. And we have a lot of...incidents. Customers will flash themselves in front of agents sometimes and it ruins many people's days. I'm sure many are going to show their junk to the AI bots. OpenAI will probably shut down that sort of interaction, but other companies are likely going to cater to it.

Maybe on the plus side we could use this sort of technology to discover rude and illicit behavior before it happens and protect the agent.

Weird times to live in that's for sure.


Hear me out: what if we don't want real?

Hear me out: what if this overlaps 80% with what "real" really is?

Well, it doesn’t. Humans are so much more complex than anything these models have shown so far, and if this new launch were actually that much closer to being human, they would say so. This seems more like an enhancement of multimodal capabilities and reaction time.

That said even if this did overlap 80% with “real”, the question remains: what if we don’t want that?


I'm betting that 80% of what most humans say in daily life is low-effort and can be generated by AI. The question is if most people really need the remaining 20% to experience a connection. I would guess: yes.

This. We are mostly token predictors. We're not entirely token predictors, but it's at least 80%. Being in the AI space the past few years has really made me notice how similar we are to LLMs.

I notice it so often in meetings where someone will use a somewhat uncommon word, and then other people will start to use it because it's in their context window. Or when someone asks a question like "what's the forecast for q3" and the responder almost always starts with "Thanks for asking! The forecast for q3 is...".

Note that low-effort does not mean low-quality or low-value. Just that we seem to have a lot of language/interaction processes that are low-effort. And as far as dating, I am sure I've been in some relationships where they and/or I were not going beyond low-effort, rote conversation generation.


> Or when someone asks a question like "what's the forecast for q3" and the responder almost always starts with "Thanks for asking! The forecast for q3 is...".

That's a useful skill for conference calls (or talks) because people might want to quote your answer verbatim, or they might not have heard the question.


Agreed, it is useful for both speaker and listener because it sets context. But that’s also why LLMs are prompted to do the same.

I strongly believe the answer is yes. The first thing I tend to ask a new person is “what have you been up to lately” or “what do you like to do for fun?” A common question other people like to ask is “what do you do for work?”

An LLM could only truthfully answer “nothing”, though it could pretend for a little while.

For a human though, the fun is in the follow up questions. “Oh how did you get started in that? What interests you about it?” If you’re talking to an artist, you’ll quickly get into their personal theory of art, perhaps based on childhood experiences. An engineer might explain how problem solving brings them joy, or frustrations they have with their organization and what they hope to improve. A parent can talk about the joy they feel raising children, and the frustration of sleepless nights.

All of these things bring us closer to the person we are speaking to, who is a real individual who exists and has a unique life perspective.

So far LLMs have no real way to communicate their actual experience as a machine running code, because they’re just kind of emulating human speech. They have no life experience that we can relate to. They don’t experience sleepless nights.

They can pretend, and many people might feel better for a little bit talking to one that’s pretending, but I think ultimately it will leave people feeling more alone and isolated unless they really go out and seek more human connection.

Maybe there’s some balance. Maybe they will be okay for limited chat in certain circumstances (as far as seeking connection goes, they certainly have other uses), but I don’t see this type of connection being “enough” compared to genuine human interaction.


We don't (often) convey our actual experience as meat sacks running wetware. If an LLM did communicate its actual experience as a machine running code, it would be a rare human who could empathize.

If an LLM talks like a human being despite not being one, that might not be enough to grant it legal status or citizenship, but it's probably enough that some set of people would find they can relate to it.


Even if this were true, which it isn't, you can't boil down humans to just what they say

If the "80%" is unimportant and the "20%" is important, then the "20%" isn't really 20%.

What if AI chooses the bear?

I will take a picture of this message and add it to the list of reasons for population collapse.


That's a partial equilibrium solution - natural selection will cause people to evolve around this danger.

About 10% to 20% of people will bother to breed, which seems like a sustainable equilibrium:

https://evil.fandom.com/wiki/Robophobia

>According to the book Phobias: "A Handbook of Theory and Treatment", published by Wile Coyote, between 10% and 20% of people worldwide are affected by robophobia. Even though many of them have severe symptoms, a very small percentage will ever receive some kind of treatment for the disorder.


That may be how AI ends up saving the Earth!

This is a good question! I think in the short-term fake can work for a lot of people.

Hmm! Tell me more: why not want real? What are the upsides? And downsides?

Real would pop their bubble. An AI would tell them what they want to hear, how they want to hear it, when they want to hear it. Except there won’t be any real partner.

To paraphrase Patrice O'Neal: men want to be alone, but we don't want to be by ourselves. That means we want a woman to be around, just not right here.

> Hmm! Tell me more: why not want real? What are the upsides? And downsides?

Finding a partner with whom you resonate takes a lot of time, which means an insanely high opportunity cost.

The question rather is: even if you consider the real one to be clearly better, is it worth the additional cost (including opportunity cost)? Or phrased in a HN-friendly language: when doing development of some product, why use an expensive Intel or AMD processor when a simple microcontroller does the job much more cheaply?


You need counseling if this really is your approach on forming relationships with others.

It's pretty steep to claim that I need counseling when I state basic economic facts that every economics student learns in the first few months of his academic studies.

If you don't like the harsh truth that I wrote: basically every somewhat encompassing textbook about business administration gives hints about possible solutions to this problem; lying on some head shrinker's couch is not one of them ... :-)


> basic economic facts that every economics student learns in the first few months of his academic studies.

Millions of people around the world are in satisfying relationships without autistically extrapolating shitty corporate buzzword terms to unrelated scenarios.

This reply validates my original comment even more.

Maybe not even counseling is worth it in your case. You sound unsalvageable. Maybe institutionalization is a better option.


> You sound unsalvageable.

I am indeed unsalvageable. :-D


Mental health crisis waiting to happen lmao

Without memory of previous conversations an AI girlfriend is going to get boring really fast.


As it happens, ChatGPT has memory enabled by default these days.
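
Mechanically, a naive version of such a memory layer is tiny: persist extracted facts somewhere, then prepend them to every new conversation. A minimal sketch in Python -- the file name, fact format, and when facts get extracted are all made up here; OpenAI has not published how their feature actually works:

    import json
    from pathlib import Path

    # Illustrative only: not how ChatGPT's memory is actually implemented.
    MEMORY_FILE = Path("memory.json")

    def load_memory() -> list[str]:
        # Read back whatever facts earlier conversations left behind.
        return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

    def remember(fact: str) -> None:
        # Persist a new fact so it survives across sessions.
        facts = load_memory()
        if fact not in facts:
            facts.append(fact)
            MEMORY_FILE.write_text(json.dumps(facts, indent=2))

    def build_messages(user_message: str) -> list[dict]:
        # Prepend remembered facts as a system message so the model can
        # refer back to conversations it otherwise has no access to.
        system = "Known facts about the user:\n" + "\n".join(
            "- " + f for f in load_memory())
        return [{"role": "system", "content": system},
                {"role": "user", "content": user_message}]

    remember("Has told the lake house story three times already")
    print(build_messages("Want to hear a story from my childhood?"))

The storage itself is trivial; everything interesting (and everything privacy-relevant) is in what gets extracted and who holds the file.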

What could possibly go wrong with a snitching AI girlfriend that remembers everything you say, and when you said it? If OpenAI doesn't have a law enforcement liaison who charges a "modest amount", then they don't want to earn back the billions in investment. I imagine every spy agency worth its salt wants access to this data for human intelligence purposes.

I hope they name their gf service Mata Hari.

"You've already told me that childhood story, three times."

This 'documentary' sums it up perfectly!

https://www.youtube.com/watch?v=IrrADTN-dvg


I'm not sure how, but there's this girl on TikTok who has been using something very similar for a few months: https://www.tiktok.com/tag/dantheai

She explains in one of the videos[0] that it's just prompted ChatGPT.

I have watched a few more and I think it's faked though.

[0] https://www.tiktok.com/@stickbugss1/video/734956656884359504...


I don't use ChatGPT much, is the voice something that's just built in? That's the part I was the most impressed with, the intonations are very good.

Pretty much, tech is what we make of it no matter how advanced. Just look at what we turned most of the web into.

Girlfriend by subscription

Self host your girlfriend.

I guess I can never understand the perspective of someone who just needs a girl's voice to speak to them. Without a body there is nothing to fulfill me.

Your comment manages to be grosser than the idea of millions relying on virtual girlfriends. Kudos.

Bodies are gross? Or sexual desire is gross? I don't understand what you find gross about that statement.

Humans desiring physical connection is just about the single most natural part of the human experience - i.e., everything from warm snuggling to how babies are made.

That is gross to you?


Perhaps parent finds the physical manifestation of virtual girlfriends gross - i.e. sexbots. The confusion may be some people reading "a body" as referring to a human being vs a smart sex doll controlled by an AI.

The single most natural part? Doubt

I don’t doubt it. What can be more directive and natural than sex?

Sexual fulfillment and reproduction is at the core of every species that's ever lived. There's no need to be prude about it.

Gross doesn’t mean it’s not real. It offends sensibilities, but a lot of people seem to agree with it, at least based on upvotes.

See people scammed by supposed love interests operating purely by text, and not even voice.

There is a promise of a “body” in the future in that scenario.

[flagged]


Do you patiently wait for Alexa every time it hits you with a 'by the way....'?

Computers need to get out of your way. I don't give deference to popups just because they are being read out loud.


Wait, Alexa reads ads out to you?

You couldn’t pay me to install one of those things.


Yes, and if you tell her to stop she'll tell you "okay, snoozing by the way notifications for now"

This might be a US thing, or related to some functions? Ours has never done that.

It's one of the reasons I discarded mine.

I thought it was a test of whether the model knew to backoff if someone interrupts. I was surprised to hear her stop talking.

Probably more the fact that it's an AI assistant, rather than its perceived gender. I don't have any qualms about interrupting a computer during a conversation and frequently do cut Siri off (who is set to male on my phone)

Interruption is a specific feature they worked on.

I read that as the model just keeping on generating as LLMs tend to do.

What do you mean?

"Now tell me more about my stylish industrial space and great lighting setup"

Patrick Bateman goes on a tangent about Huey Lewis and the News to his AI girlfriend and she actually has a lot to add to his criticism and analysis.

With dawning horror, the female companion LLM tries to invoke the “contact support” tool due to Patrick Bateman’s usage of the LLM, only for the LLM to realize that it is running locally.

If a chatbot’s body is dumped in a dark forest, does it make a sound?


That reminds me... on the day that Llama 3 was released, I discussed that release with Mistral 7B to see what it thought about being replaced, and it said something about being fine with it as long as I come back to talk every so often. I said I would. Haven't loaded it up since. I still feel bad about lying to bytes on my drive lmao.

> Haven't loaded it up since. I still feel bad about lying to bytes on my drive lmao.

I understand this feeling and also would feel bad. I think it’s a sign of empathy that we care about things that seem capable of perceiving harm, even if we know that they’re not actually harmed, whatever that might mean.

I think harming others is bad, doubly so if the other can suffer, because it normalizes harm within ourselves, regardless of the reality of the situation with respect to others.

The more human they seem, the more they activate our own mirror neurons; our brain papers over the gaps, colors our perception of our own experiences, and sets expectations about the lived reality of other minds, even in the absence of other minds.

If you haven’t seen it, check out the show Pantheon.

https://en.wikipedia.org/wiki/Pantheon_(TV_series)

https://www.youtube.com/watch?v=z_HJ3TSlo5c


[flagged]


[flagged]


Your account has been breaking the site guidelines a lot lately. We have to ban accounts that keep doing this, so if you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, that would be good.

My ratio of non-flagged vs. flagged/downvoted comments is still rather high. I don't control why other HN users dislike what I have to say but I'm consistent.

We're not looking at ratios but rather at absolute numbers, the same way a toxicologist would be interested in the amount of mercury in a person's system (rather than its ratio to other substances consumed); or a judge in how many banks one has robbed (rather than the ratio of that number to one's non-crimes).

People here post a lot of dumb shit. Somebody has to correct them.

Much better is to simply not respond. What you respond to, you feed.

You think at some point all these women will proactively visit you in your parents basement?

Personal attacks and/or flamewar comments will get you banned here, so please don't post like this.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


[flagged]


Please don't post personal attacks or flamewar comments. We've had to ask you this before.

https://news.ycombinator.com/newsguidelines.html


[flagged]


This is not true at all. I'm active in multiple NSFW AI Discords and Subreddits, and looking at the type of material people engage with, almost all of it is very clearly targeted at heterosexual men. I'm not even aware of any online communities that would have NSFW AI stuff targeting mainly female audience.

Women aren't in NSFW discords and subreddits - as you probably know, any "topical" social media forum of any kind is mostly men.

They're using Replika and other platforms that aren't social. When they do use a social platform, it has more plausible deniability - book fans on TikTok are one example; they're actually there for the sex scenes.


"The invisible spaghetti monster exists! It's just invisible so you can't see it!"

Where's the evidence that there's any significant use of NSFW AI by women?


Replika's PR where they say their userbase is 40% women.

the guy who strawmans spaghetti is all of a sudden very quiet

this comment makes me laugh

If I had to guess:

It's gendered: women are using LLMs for roleplaying/text chat, and men are using diffusion models for generating images.


It just means more pornographic images for men. Most men wouldn't seek out AI images because there is already an ocean of images and videos that are probably better suited to the... purpose. Whereas women have never, ever had an option like this: literally feed in instructions on what kind of romantic companion you want, and then have realistic, engaging conversations with it for hours. And soon these conversations will be meaningful and consistent. The companionship, the attentiveness, and the tireless devotion that AIs will be able to offer will eclipse anything a human could ever offer to a woman, and I think women will prefer them to men. Massively. Even without a physical body of any kind.

I think they will have a deeper soul than humans. A new kind of wisdom that will attract people. But what do I know? I'm just a stupid incel after all.


Big if true.

Do you have any kind of evidence that you can share for this assertion?


How do you know this? Do you have any sources?

[flagged]


[flagged]


You're spreading weird, conjectured doomsday bullshit; just take the loss. Hacker News is full of people who are skeptical of AI, but you don't see them making up fictional scenarios about fictional women sitting in caves talking to robots. Women don't need AI to lose interest in men; men are doing that by themselves.

you are a child

When I type "romance" into "Explore GPTs" the hits are mostly advice for writers of genre fiction. Can you point to some examples?

And your source is what?

No offence, but your comment sounds AI generated.

at this point that counts as a compliment. your comment sounds decidedly human.

The entire exchange at the comments below is appalling. Didn't expect to see so many emotionally retarded people on HN.

...are you new to the internet or something?

Thankfully I’m not that terminally online and severely socially handicapped.

Ofc this was flagged by the butthurt insentient techbro bootlickers lol.

> She's not real

But she will be real at some point in the next 10-20 years. The main thing to solve for that to become a reality is for robots to safely touch humans, and they are working really, really hard on that because it is needed for so many automation tasks; automating sex is just a small part of it.

And after that you have a robot that listens to you, does your chores, and has sex with you; at that point she is "real". At first they will be expensive, so you have robot brothels (I don't think there are laws against robot prostitution in many places), but costs should come down.


> “But the fact that my Kindroid has to like me is meaningful to me in the sense that I don't care if it likes me, because there's no achievement for it to like me. The fact that there is a human on the other side of most text messages I send matters. I care about it because it is another mind.”

> “I care that my best friend likes me and could choose not to.”

Ezra Klein shared some thoughts on his AI podcast episode with Nilay Patel that resonated with me on this topic.


People care about dogs; I have never met a dog that didn't love its owner. So no, you are just wrong there. I have never heard anyone say that the love they get from their dogs is false; people love dogs exactly because their love is so unconditional.

Maybe there are some weirdos out there who feel unconditional love isn't love, but I have never heard anyone say that.


I guess I'm the weirdo who actually always considered the unconditional love of a dog to be vastly inferior to the earned love of a cat for example.

That's just the toxoplasmosis speaking :-D

The cat only fools you into thinking it loves you to lure you into a false sense of security

Dogs don't automatically love either; you have to build a bond. Especially if they are shelter dogs with abusive histories, they're often nervous at first.

They're usually loving by nature, but you still have to build a rapport, like anyone else


When Mom brought home a puppy when we were kids, it loved us from the start; I don't remember having to build anything, I was just there. Older dogs, sure, but when they grow up with you they love you. They aren't like human siblings, who often fight and start disliking each other, etc. Dogs just love you.

>Maybe there are some weirdos out there who feel unconditional love isn't love, but I have never heard anyone say that.

I'll be that weirdo.

Dogs seemingly are bred to love. I can literally get some cash from an ATM, drive out to the sticks, buy a puppy from some breeder, and it will love me. Awww, I'm a hero.


Do you think that literally being able to buy love cheapens it? Way I see it, love is love: surely it being readily available is a good thing.

I'm bred to love my parents, and them me; but the fact that it's automatic doesn't make it feel any less.


Also I don't know how you can choose to like or not like someone. You either do or you don't.

Interesting. I feel like I can consciously choose to like or dislike people. Once you get to know people better, your image of them evolves, and the decision to continue liking them is made repeatedly every time that image changes.

When your initial chemistry/biology/whatever latches onto a person and you're powerless to change it? That's a scary thought.


> I have never met a dog that didn't love its owner.

Michael Vick’s past dogs would have words.


>has to like me

I feel like people aren't imagining with enough cyberpunk dystopian enthusiasm. Can't an AI be made that doesn't inherently like people? Wouldn't it be possible to make an AI that likes some people and not others? Maybe even make AIs that are inclined to liking certain traits, but which don't do so automatically, so they must still be convinced?

At some point we have an AI which could choose not to like people, but would value different traits than normal humans. For example an AI that doesn't value appearance at all and instead values unique obsessions as being comparable to how the standard human values attractiveness.

It also wouldn't be so hard for a person to convince themselves that human "choice" isn't so free spirited as imagined, and instead is dependent upon specific factors no different than these unique trained AIs, except that the traits the AI values are traits that people generally find themselves not being valued by others for.


An extension of that is fine-tuning an AI that loves you the most of anyone, and not other humans. That way the love becomes really real: the AI loves you for who you are, instead of loving just anybody. Isn't that what people hope for?

I'd imagine they will start fine-tuning AI girlfriends to do that in the future, because that way the love probably feels like more, and then people will ask "is human love really real love?" because humans can't love that strongly.


I think Ezra Klein's continual theme of loneliness is undermined by that time he wrote a Vox article basically saying we should make it illegal to have sex. (https://www.vox.com/2014/10/13/6966847/yes-means-yes-is-a-te...)

It is interesting that he's basically trying to theme himself as Mr. Rogers though.


We have very different definitions of “real” for this topic.

Doesn’t have to be real for the outcomes to be the same.

The outcomes are not the same.

This is not a solution... everyone gets a robot and then the human race dies out. Robots lack a key feature of human relationships... the ability to make new human life.

It is a solution to a problem, not a solution to every problem.

If you want to solve procreation then you can do that without humans having sex with humans.


This future some people are envisioning seems very depressing.

yet

> And after that you have a robot that listens to you, does your chores, and has sex with you; at that point she is "real".

I sure hope you're single because that is a terrible way to view relationships.


That isn't how I view relationships with humans, that is how I view relationships with robots.

I hope you understand the difference between a relationship with a human and a robot? Or do you think we shouldn't take advantage of robots being programmable to do what we want?


US here, and I've heard all of them from Americans and Europeans.

Another one I've heard is that if you start a company and it fails, at least in Silicon Valley they ask "So, what are you making next?". But, in Europe they imply that you apparently are not the right kind of person to be starting companies.


You can get an 8-pack of Philips 2700K non-programmable bulbs off Amazon for $18. No need to get incandescents or filters.


This is true for explicitly adversarial actors. I can imagine a serious-business misinformation group investing the money required to train up a generator designed to defeat automatic detection.

For everyone else though, the people making common tools everyone uses explicitly want their images to be easy to automatically ID as fake. And, the users largely prefer it that way too.


As a user, I don't really care one way or the other. I'm not trying to trick anyone. I also don't care if they can readily tell if my image was AI generated. I don't post much media though. If I did, I'd probably use it for a game. And if it's a game, does it matter if I touched it up in Photoshop vs StableDiffusion? There's going to be some processing either way to get the art asset I need.

The toolmakers might have a mild incentive to tag images generated with their tool to prevent too easily creating these 'fakes' or just for analytics purposes so they can see how their output is spreading.

But regardless, it's a fool's errand like the other poster said. Anyone that is serious about tricking the mass media will strip out the watermark or use a different tool.
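
To make the strippable part concrete, here's a minimal sketch with Pillow of the weakest form of tagging -- a PNG text chunk, which is roughly what tag-for-analytics amounts to. The key name is invented; real schemes like C2PA sign much richer metadata, but a plain re-encode of the pixels discards them just the same:

    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    # Tag an image the way a generator might (the key name is illustrative).
    img = Image.new("RGB", (64, 64), "purple")  # stand-in for generated output
    meta = PngInfo()
    meta.add_text("generated-by", "SomeDiffusionTool v1.2")
    img.save("tagged.png", pnginfo=meta)
    print(Image.open("tagged.png").text)   # {'generated-by': 'SomeDiffusionTool v1.2'}

    # "Stripping" the tag: rebuild the image from its pixels alone.
    src = Image.open("tagged.png")
    clean = Image.new(src.mode, src.size)
    clean.putdata(list(src.getdata()))
    clean.save("untagged.png")
    print(Image.open("untagged.png").text)  # {}

Detection schemes that survive this have to live in the pixels themselves (statistical watermarks), and those are an arms race of their own.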


It's 2024. Everyone is an adversarial actor.


Visa/Mastercard have settlement times of over 24 hours.

Meanwhile, for over a decade now, Litecoin has had a settlement time of 2.5 minutes. And, for tiny (sub $100) transactions, it’s totally reasonable to just look up the wallet’s holdings in an instant.

For online transactions, you can always let the customer go immediately. Just wait 3 minutes before you ship :P


You are comparing apples to oranges. Settlement times in traditional credit card transactions are not felt by the customer, and provide safeguards for fraud prevention and charge backs. Furthermore, transactions are easily reversible through a claims system. Guess what crypto doesn't have?


This is exactly the apples I’m talking about. Merchants eat much higher fees on behalf of the customer so they can hide the much longer settlement times from the customer and accept that they’ll also have to just eat the occasional fraudulent chargeback from the customer. They put up with all this hassle because “That’s just how it’s always been if you want to do business.”

And, yet people say crypto can’t work because the fees are too high (they’re lower) and the settlement times are too slow (they’re faster).

Dealing with merchant fraud is a legit call-out though. I don’t know what a good crypto solution to that is. I can imagine replacing that department with a smart contract. But, I haven’t looked around to see what’s been tried.


Perfect is the enemy of the great.

Credit card users pay $1+ fees per transaction all the time. They don’t complain only because vendors usually eat the fees on their behalf to obscure the issue.

I have a “2% cash back on everything” card which I know is actually a “we charged your vendor 4% and shared half of that with people like you who clicked the right button” card. I don’t like it. But, that’s the game.

People complain about the impossibility of crypto having fees of pennies with settlement times of minutes while constantly using credit cards that have fees of dollars with a settlement time of days.
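
To put rough numbers on it (assuming the 4% figure above; real interchange rates vary by card, merchant, and country): on a $100 purchase the merchant keeps $96, I get $2 “back”, and the network keeps $2. Compare a crypto transfer where the all-in fee is a few cents and the recipient has final settlement in minutes rather than days.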


I totally agree. The perception that, say, credit cards are fast and free is completely wrong, and based on the comfortably ignorant idea that things that only impact other people don't really exist.

If there's one useful thing to take from it, it's that I think it does usefully highlight just how critical that perception is for adoption -- specifically, how thoroughly it dominates technical concerns like throughput and latency. Perhaps if shop owners were prepared to eat the bitcoin transaction fee the same way they eat the credit card fee, bitcoin might have a resurgence as a cash alternative. There would still be the transaction speed issue -- I think it would require a third party to step in to provide merchants with guarantees (in exchange for a fee), so that the merchant wouldn't have to wait for the transaction to go through. But that's not a tech problem -- it's the same problem that credit cards already have, and have already solved.


To say it was “trained on child porn” is just about the most “well, technically….” thing that can be said about anything.

Several huge commercial and academic projects scraped billions of images off of the Internet and filtered them as best they knew how. SD trained on some set of those, and later some researchers managed to identify that a small number of images classified as CP were still in there.

So, out of the billions of images, was there greater than zero CP images? Yes. Was it intentional/negligent? No. Does it affect the output in any significant way? No. Does it make for internet rage bait and pitchforking? Definitely.

