Google is picking ChatGPT responses from Quora as correct answer (twitter.com/8teapi)
393 points by mrpatiwi on Sept 26, 2023 | 224 comments



Google's "answers" feature was already unreliable enough when there was only one AI involved (the one picking the answers) - I remember it once said that the capacity of an Amiga floppy was 1760 KB, because the Wikipedia page it picked (https://en.wikipedia.org/wiki/Floppy_disk_variants#Amiga) failed to mention the capacity of the "standard" floppies and only provided a number for the (non-standard) HD floppies.


I once googled a niche question, liked the instant answer, saw it was on HN. Clicked on it. It was me. I wrote the answer to my own question, and I guess forgot 5 years later.


I had a similar experience trying to find a justification for a “House rule” we use when playing Catan (aka “the penny rule”).

In trying to find a reputable source, the first result was to an HN comment… that I wrote.

The folks we were playing with were not convinced.


That's even better than my experience lol! I was just like "WOW THIS IS COOL, but... nobody will ever google this lol" I don't even remember the question, I just know it was some shenanigans with Rust and various other languages.


Searching for it now there's also a Reddit post mentioning it, but that might also be from you/your friends. I like that rule, but I can see it being hard to convince people to use it despite it obviously making the game more fair.


Well at least we’re talking about it now! What is the rule?


If you don't get resources in a turn (and didn't get robbed) you get a penny. You can trade pennies for resources, but the exchange rate gets worse the more victory points you have.
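
Purely for illustration, a toy sketch of the rule in Python; the exact exchange-rate schedule here is made up, since the comment only says the rate worsens with victory points:

```
# Toy sketch of the "penny rule" above. The specific rates are
# invented for illustration; only the shape (worse exchange rate at
# higher victory points) comes from the comment.
def pennies_per_resource(victory_points):
    """Cost of one resource in pennies, worsening as you score."""
    if victory_points <= 4:
        return 3
    if victory_points <= 7:
        return 4
    return 5

def end_of_turn(got_resources, was_robbed, pennies):
    """Award a penny only on turns with no resources and no robbery."""
    if not got_resources and not was_robbed:
        pennies += 1
    return pennies
```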


How are you supposed to help it if _you’re_ the authority?


I had a similar experience recently. I started working for a firm that has some legacy Perl code still in service, and in the process of Googling I was presented with one of my own answers on perlmonks.org from over 20 years ago. The emotions were complicated.


This is what made me start taking notes and saving snippets in Obsidian. The number of times I googled something I struggled with and found a Reddit or other post (from myself) where I edited in my answer after getting no help was ... interesting.


I once googled the same question and got led to the same Reddit answer, where some twat just says "google it". Ended up commenting in that thread with the actual answer, just so that when I land there for the 3rd time there's an actual answer available.


Has happened to me on StackOverflow multiple times. Always hilarious and makes one reflect.


Always fun when you go to upvote an answer you found useful, only to be told you can’t upvote your own answer… definitely a pensive moment!


I see what you did there.

Reflect. A mirror.

Is it knowledge (gleaned from the world) or just a reflection?

Well if you're looking straight at it then it's pretty obvious whether it's a mirror or not. But add a couple of filters and suddenly you're having a conversation with yourself like a crazy person.


Had a boss who needed some obscure regex. The answer he found had one minor bug; he fixed it, and whenever he needed the regex again he'd find that same question.


That reminds me of Twitter user @foone who often goes on deep dives on random tech and old chips, and on several occasions the top Google result for a chip serial number is one of their old tweets


Was that the "how knowledgeable person that must've been" moment for you, before you noticed? :D


I didn't even notice till I saw I couldn't do the normal upvote / save comment, then I looked at the username lol!


That's a kind of "google instant answer to my question is an article from my own blog" situation.


One of my favorite examples was this story of someone who tried to correct the answers for how to caramelize onions. Google got the last laugh by citing the text on their page that had the exact wrong answer they were trying to debunk in the first place: https://gizmodo.com/googles-algorithm-is-lying-to-you-about-...


And since 2021, Google has also placed scraped Wikipedia articles from what may be shadow, low-end colleges operated by the same marketing group higher than Wikipedia itself, despite these sites omitting proper attribution to the original Wikipedia text [1].

[1]: https://en.wikipedia.org/wiki/Wikipedia:Reusing_Wikipedia_co...


Yeah, I remember when you searched for the Dalai Lama and it returned "Nationality: Chinese".


I remember searching for the Rainbow Bridge and learning that it connects the USA and Canada while being guarded by Heimdall, the Asgardian.


I remember, when it was new, it was claiming that Neil Armstrong first walked on the Earth in 1969.


That’s pretty funny, at least.


Search: "first baby boomer president".

Google knowledge panel's answer: "Harry S. Truman".

The correct answer is Bill Clinton, born in 1946.

Truman was president from 1945 to 1953, so he was the first president in office during the actual baby boom, when the Baby Boomers were babies. So there seems to be a connection, it's just totally the wrong one.


ChatGPT:

> The first baby boomer president of the United States is Bill Clinton. He was born on August 19, 1946, which places him within the demographic cohort known as the baby boomers.


Quora is clearly doing this so that they rank highly on Google and get clicks.

What surprises me is that Google doesn't seem to care, despite their stance that "Using automation—including AI—to generate content with the primary purpose of manipulating ranking in search results is a violation of our spam policies."

https://developers.google.com/search/blog/2023/02/google-sea...


>Quora is clearly doing this so that they rank highly on Google and get clicks.

Quora is garbage and always has been. For a while people thought it was going to be the place where smart insiders provided great answers. Then it turned into another spammy Q&A site that forces you to log in... because VCs need their money.


For a short period of time it was indeed a good place to get answers from knowledgeable people, and they allowed a wider range of questions than Stack Exchange, which was nice.

But it's been complete garbage for several years now. The space is ripe for disruption :)


The disruption needed is to keep it small and focused on quality. Every platform that has its "eternal September" ends up ruined in the way you describe.


For Quora, they made an active decision to destroy the nice credit economy that was in place and push garbage, sensational questions and answers. It doesn't have to be that way.


Yeah, I used to be delighted every time I received the weekly digest in my email. Nowadays I still read it occasionally, but it's mostly garbage unfortunately.


Back in 2013 or so Quora was absolutely amazing, flowing with content so fascinating and enriching that I had to unsubscribe from updates because I would just spend my entire day reading there. I was sad to come back many years later and find it mostly overtaken by ads and spam.


It feels like it’s time for Quora to sunset.

It had its moment and that moment was glorious, but now, as Dr. Kelly Starrett says, "You can't make a bad position good by shoving it into a good position. You have to start over."


Quora's boss sits on OpenAI's board.


Follow the money


Source?



Both services gain from not actually enforcing this policy.


Do as we say, not as we do.


This happened not long ago: https://news.ycombinator.com/item?id=37368243

We were already in the "post-truth" world, and AI is just going to drive us further down that path.

The tendency of a lot of the population to prioritise quick and vapid answers over researching and thinking for themselves isn't helping either.

That said, I'm not sure why whether eggs can be melted is a popular question. Is that a meme?


Post-truth is the greatest step toward truth in human history. For centuries the clergy, government, media, and corporations have owned the Truth and defined the Overton window. Post-truth’s overwhelming flood of true, false, and irrelevant claims is empowering critical thinking. It’s offering freedom.

The printing press led to the renaissance. The internet is now leading a new one.

There will continue to be protests by the establishment powers, and unfortunately there will likely be conflict. The fall of monarchies was not without bloodshed. Hopefully we can find a way to achieve similar progress this time around without the same loss of life.


> Post-truth’s overwhelming flood of true, false, and irrelevant claims is empowering critical thinking. It’s offering freedom.

By and large, critical thinking hasn't been taught effectively in public schools for at least 10 years now. Even when it was, only a fraction of the public were able to think independently and logically in such a way as to eliminate spurious data/voices and deduce the truth accurately.

Far from empowering critical thinking, this lack of truth is creating a powerful vacuum that can easily be filled by propaganda and harmful ideology.

No wonder the US's adversaries are so active on our social networks.


I don’t know that critical thinking can be taught. However, you may be right that schools aren’t creating circumstances for children to learn it. I don’t know.

What I do know is that most of the information that has ever existed (or will ever exist) is false. Because information can only be true to an extent, from a perspective, and for a purpose. If you change the scope and purpose of a discussion and look at it from a different perspective the truth will appear completely different.

In this way I welcome our adversaries on our social networks. Even if they have the worst intentions we will end up better off (and so will they). We’re just that damned good.


> Even if they have the worst intentions we will end up better off (and so will they). We’re just that damned good.

This reads like sarcasm and flies in the face of what’s been going on in the world for the last few years.

You’re making predictions very confidently as if they’re a done deal, but it’s all based on what you think should happen. It’s been several years since social media use has spread to a good chunk of the world’s population. We should be able to see the start of our post truth world’s effects and they’re not trending in a positive direction. Like, have you seen what’s happening? Here’s one recent example off the top of my head https://apnews.com/article/germany-extremism-far-right-russi...


It flies in the face of what the corporate media (e.g. AP) and government have been telling us for the last few years.

However, I for one do not believe that antisemitism is a new thing in Germany. It seems far more likely that social media has merely allowed those individuals to share their (bad) ideas. So what is better? For them to think that and for us to be aware of it, or for them to think that and for us to be blissfully unaware?

I think it is better to be exposed to those terrible ideas, because it allows us to address the issue.


> Because information can only be true to an extent, from a perspective, and for a purpose. If you change the scope and purpose of a discussion and look at it from a different perspective the truth will appear completely different.

This is the worst type of pomo garbage. This is true only in the case of relatively unimportant 'he-said-she-said' type of situations. In most cases, there's a canonical flow of events that can be represented objectively regardless of the 'scope and purpose' of the discussion or the 'perspective'.

There are not 'multiple truths' and there are not 'alternative facts.' There may be differing interpretations of why people did the things they did and it may be hard to determine what actually did happen but, at the end of the day, things either occurred or they did not occur.

We cannot accept weird split realities.


> Because information can only be true to an extent, from a perspective, and for a purpose. If you change the scope and purpose of a discussion and look at it from a different perspective the truth will appear completely different.

I can't speak to how universal it is, but there is absolutely an effort to teach these skills in American public schools, especially through history classes and analysis of primary sources. As an example, here's the paraphrased techniques from a Library of Congress quarterly for Teachers about "Teaching with Primary Sources":

> Think about a document’s author and its creation. Situate the document and its events in time and place. Carefully consider what the document says and the language used to say it. Using Background Knowledge: Use historical information and knowledge to read and understand the document. Identify what has been left out or is missing from the document by asking questions of its account. Ask questions about important details across multiple sources to determine points of agreement and disagreement.

[1] https://www.loc.gov/static/programs/teachers/about-this-prog...


If we have to go through a holocaust or a genocide to get to that utopia where people can sift through this flood of information with well-honed critical thinking skills, will it be worth it?

Because, in the extremes, that’s what we’re talking about. Genocides often come as the result of a long disinformation and dehumanization campaign.


I think you’re viewing this completely backwards. With a more free and efficient public discourse the opportunity for disinformation and dehumanization is reduced.

Sure, some idiots on the internet will be saying x, y, and z, but that's not very effective. It'll only be roughly as effective as their claims are true. Historically these campaigns would be waged by groups with overwhelming culture-making powers, who could get away with campaigns that were much more divorced from reality: the church, the state, corporate media, etc. For those groups, the truth didn't matter because they controlled society… they controlled the "Truth."


> It’ll only be roughly as effective as their claims are true.

Boy, do I wish I had your faith. That hasn't been my experience with how claims are evaluated on the internet, nor with how they spread.

It certainly seems to me that claims that are more interesting or emotive spread further, regardless of their truth value. It certainly seems to me that corrections to false claims don't spread nearly as far as the original false claims. And it seems to me that false claims are much easier to produce than true claims, so are produced in greater volumes.

I just don't see it being "as effective as their claims are true", I see it being "as effective as their claims are *desirable*". That seems (again, to me) to be the much more important factor in how a claim spreads. And...well...humans have a very strong tendency to blame their problems on outside groups and factions. It's very desirable for groups to blame our problems on others.

I really have seen very little evidence that an idea/meme's effectiveness at spreading has a strong correlation to its truth value.

I hope you're right. I want to live in a world where you're right. But I also am terrified that we live in a world where you're wrong. I don't think the world you want to live in happens automatically. I think it takes a tremendous amount of work to achieve, and if we just assume it's natural, I don't think we will realize that world.


> critical thinking hasn't been taught effectively in public schools for at least 10 years now.

I'm not well versed in education, what are you basing this on?


I left a public high school ~10 years ago, and it was definitely the case for my school back then. We were maybe given a little booklet on critical thinking at some point and that was about it. I can't speak to other public schools or anything more recent, but I can't imagine schools worse than where I went putting in a bigger effort. You can argue it's taught all over the curriculum, but most of my teachers didn't really want to challenge us in that way (e.g. English class where you had to have the right themes for a book just because, science/math/history classes where there wasn't really something to question).


Yes, exactly. Also, in any subject where there isn't a clear "wrong" answer, you as a student are free to take up any position as long as you can "back it up".

In theory, that sounds like a decent policy. But in practice, it creates an environment where students just learn to write thoughtful-sounding BS and know they don't really need to work hard to engage with the material or even understand it.


Schools ATTEMPT to teach "critical thinking" and things like media literacy, but little Timmy's dad says liberals are trying to indoctrinate him, and little Timmy sits in class bitching about how "I'll never use any of this".


It's also leading to the widespread sharing of poorly thought-out or poorly researched ideas. While there has always been a tension between "truth" and what various entrenched powers wanted to portray to the population, it's not clear to me that the world is better with lots of niche groups believing things that a little bit of additional research would show to be based on demonstrably false reasoning.


> The printing press led to the renaissance. The internet is now leading a new one.

The first printed western book was the f'ing bible! Art and science led the renaissance. The content matters, not the medium!

> The fall of monarchies was not without bloodshed.

Back then, the power gap in weapon technology was waaay smaller. We will not overcome anything except by a landslide election victory.

> clergy, government, media, and corporations

Consider AI a tool of the ruling class and tell me in which direction the Overton window will move.


I feel this Renaissance is going to lead us right off a cliff though. Just yesterday I heard a friend say they're not giving their kids any vaccines at all; they trust moms on TikTok more than pediatricians and other doctors.

Another friend almost lost their child to an easily correctable condition because they wanted to have their child at home, since you just can't trust doctors and clinics. But where did they go when their child was delivered and was struggling to breathe? The hospital, to those doctors they refused to be around originally.

I've had people with multiple degrees tell me with a straight face eating microwaved foods is eating radiation that will cause cancer.


But these people all existed 10, 20, 30 years ago; they have existed and will continue to exist throughout humanity. You cannot fix stupid. What is the solution, to control 'The Message'?

I agree with the parent. I grew up on the 90s internet, where you didn't believe anything on it! Somewhere along the line this changed (I blame 2007 -- the iPhone / huge numbers of people going online during this period), and that is the problem.

If we have an issue, it's that people take information on the internet as gospel, hopefully if there's so much absolute dogshit on it, it will make people skeptical again.


I fully agree they've existed since the dawn of communication. It's different now that it's near free and available to broadcast that idiocy to a billion people in an instant.

I'm not arguing the government should step in and massively regulate speech. I'm just saying I'm not exactly excited for where we're going on this ride. I think it's going to get a lot worse before it gets better.


> It's different now that it's near free and available to broadcast that idiocy to a billion people in an instant.

This doesn't matter; truth needs to travel, but bullshit can be made up on the spot. The internet is way more beneficial for the truth. I talked to people before they used the internet, and it was bullshit all over back then as well, but there was no truth around to contradict them, so maybe it wasn't as obvious. Today you can easily find out, thanks to the internet.


If it's irrelevant, why does it seem there are far more flat earthers and anti-vaccine people today than 40 years ago?

I do agree there are positives to the truth being more accessible. I don't need to go to the library or pay massive fees to access peer reviewed articles these days. I can collaborate with peers all over the world to search for the truth. This doesn't help when most of the popular web is actively pushing up falsehoods instead of the truth. And that's happening on a scale that's never before been possible.


There were no flat-earther or anti-vaccine communities 40 years ago.


There have been anti-vaccine communities since vaccines were created.

Honestly that part doesn't worry me. What worries me more is repeating the fascism of the 1930s: as the world turns into a post-truth, uncertain mess, people will be driven away from democracy right into the hands of authoritarians.


That's true. It's about scale and ease of access. You had to be part of some group. Now you can get the latest news about how the yab is part of the plandemic 24/7 from your buddies on the internet.

The fascism part is interesting. I think we're wholly unprepared to deal with it because we've forgotten about all the fascists at home during that time (assuming you're talking about the US). I doubt as many people know about the American Bund and Nazi rallies in Madison Square Garden as they do about D Day.


The question that we need to ask is "Will unlimited lies kill us faster than unlimited truth?"

If we can avoid global warfare with unlimited truth/lies then what we are doing is fine.

If we cannot, then openness is a dead end. Democracy will vote in authoritarian leaders to attempt to overcome the ocean of lies and bullshit and we will destroy ourselves.


I don’t understand that question. What system can distinguish lies from truth better than each person for themselves? To me, the question is more freedom of speech or less freedom of speech. At best retaining the existing restrictions would have a neutral impact on both, and likely would restrict the truth more than lies.

After all, the truth is often more quiet than a lie.


> What system can distinguish lies from truth better than each person for themselves?

The average person has effectively zero ability to distinguish truth from lies, in my experience. They simply go with either whatever they heard first or whatever fits their prejudices. So, I'd say that almost _any_ system of genuinely unbiased fact-checking would improve the outcome.


>and likely would restrict the truth more than lies.

Lies are infinite and the truth is not, so I see that as logically impossible. You would filter far more lies by total number, but more truths per capita. In a system where unlimited or otherwise near-infinite lying is possible, the group with the most entropy to spend wins: I just keep you busy filtering bullshit to the point you have nothing else to do.

The problem with lying, at the end of the day, is there has to be either a social or legal cost to the liar. If not, all communication turns to spam. Then people will go to industry or government to deal with the problem in one way or another.

Free speech absolutists that ignore physics reject reality.


A lot of doctors are stupid and a lot of TikTok moms are smart


True but irrelevant. A stupid doctor still has years of training, resources to assist, and access to smarter doctors.

A TikTok mom will (typically) have anecdata at best.


Doctors are specifically selected and trained to act and think according to clinical guidelines and published best practice. Critical and holistic thinking is discouraged; you can't have both.

Are the above commenters saying Tiktokers are stupid, or moms are stupid, or that the combo is stupid?

Moms kept the human race going strong for all of history. Have doctors accomplished that much in comparison, that they should be held in higher regard in society than mothers?


> Are the above commenters saying Tiktokers are stupid, or moms are stupid, or that the combo is stupid?

I'm arguing you shouldn't hire a plumber when you're needing to fix a car, you shouldn't hire a limo driver to fly a jet airliner, you shouldn't trust a chef for what kind fertilizer you should use on your plants.

I'm not arguing all TikTokers are stupid. I'm not arguing all moms are stupid. I am arguing maybe they're not the best place to base your trust in the efficacy of vaccines on. Maybe the best place to get feedback on what kinds of toys seem to hold kids' attention or strategies for getting your kids to eat vegetables, but probably not the best person to ask how to get a satellite into orbit.

A big problem is people end up building trust around these personalities and start trusting their input on topics they've got no real experience in, other than maybe some personal anecdotes.

And I totally agree, someone can absolutely still be an idiot even with letters at the end of their name. One shouldn't just trust the messenger entirely, it should really only be a starting signal to help the signal to noise ratio.


Stupid people are unavoidable, that's why accountability is important.

Stupid doctors don't stay doctors.

Stupid mothers remain mothers for life.


But so far LLMs are mostly controlled and used by corporations, how does that help?


> The tendency of a lot of the population to prioritise quick and vapid answers over researching and thinking for themselves isn't helping either.

This makes no sense... how do they go about "researching and thinking for themselves" without the internet? Encyclopædia Britannica stopped publishing in 2010.


For what it's worth, World Book still publishes the print version of their encyclopedia.


They look great on a shelf.


Just don't stop at the very first answer you get. Dig a little.


When do you stop digging?

When you reach a primary source? If I can't get a cheap, reliable answer to a simple question without looking into a potentially dense primary source, then online search becomes about as useful as a library card. Besides, not every domain is expected to include primary sources. Did the author of this recipe really think that a wok is better than a skillet for fried rice, or is this the written delusion of a generative AI?

Do I instead only look for answers found in reputable sites? Aggressive SEO has made badly formatted sites indistinguishable from GPT sludge. Personally, I don't trust answers from sites I haven't heard of before my search.


You can’t outsource intuition - you have to build it yourself. You need to be willing to look like a fool and be wrong. It’s less a question of when to stop digging and more a question of self awareness and ability to keep an open mind.


>You can’t outsource intuition

Isn't that the entire purpose of the scientific method?

Intuition doesn't scale. The behavior of radioactive materials is not intuitive. If I attempt pure intuition alone, I'm going to die of butt cancer or turn into a zombie or some crap.

Bullshit Asymmetry becomes a huge problem here. If I have to worry that every single thing is a lie or is otherwise going to kill me, then I, and society at large, go on the defensive. This allows things like fascist leaders to get elected under the guise of protecting me from the 'big bad world'.

We are in for a lot of bad shit, and we can already see this unfolding in politics.


Maybe knowing or finding (primary/ultimate) truth is overrated. We search for, and find, an answer. If we apply the answer and it doesn't work out well, we dig a little deeper. We learn the fall-down-and-get-up way. It might be a primitive way of learning, but a proven one.

Ultimately... the ultimate truth does not exist, let alone could it be known or found. Every answer is wrong; some statements are less wrong than others.


Why can't it be both? You can get a cheap, reliable answer to a simple question if the search engine directs you to the exact, relevant paragraph of a primary source. Unfortunately that's not the direction search engines are going in.


I'm glad you know the solution, but non-HN people are going to just accept whatever Google shows in big text.


My strategy is to find at least 3 unrelated sources that agree before I start believing. Although it seems like that strategy may also be insufficient soon.


"The internet" is a vast place. If you believe the first answer on Google, or something you received on social media, then you are not thinking for yourself. If you bother to read the source or multiple sources and decide that the evidence is solid, you are researching and thinking for yourself.


I think the problem is that they do use the internet so they think they're doing the right thing but most of the time get bored after reading the 2-3 sentence Google snippet about it. "Ok, yeah you can melt an egg! See?! Google says!"


Odd, in a sense. I've dug into more academic journals recently than I did in the past, when Google et al. were effective and good.

For sure, the internet is entering its daytime-TV-and-stale-soap-operas phase, like television did.

Half of everybody will probably abandon ship for more exciting content elsewhere.


What I don't understand is why Quora would be auto-generating an answer for this question in the first place.

The theory in the tweet that it's about identifying a frequent query doesn't really explain anything: the question already had two answers written by humans years ago, there was no need for them to come up with additional answers.


Quora is already a dumpster fire. Every time I go to Quora from Google, I see the question at the top, but then right below it I see answers to different questions. You have to scroll past those, past a block of ads, past a sponsored answer (to another unrelated question), just to get to the answer, which itself is from a confidently incorrect human who is just peddling their own website for SEO juice, and it is truncated at 2 lines, which means you need to click "read more", which spawns an additional set of ads that you need to scroll past to read lines 3 and 4 of the answer.

I visited a link this week and saw ChatGPT as the top result and I thought the same thing. Is Quora signaling to me that you are better off not asking its community questions; just throw it to the AI gods and good luck? Oh, and look at these 3,000 ads and take these 209 cookies before you go.

Luckily I search with Kagi now, so I told it to just never give me search results with Quora in it. I'd prefer not knowing that dumpster fire of a website still exists.


I don't even ever click through to Quora anymore. It's useless: you never get the answers to the question unless you log in, and even then it shows only one or two answers and bombards you with other unrelated questions.


Agreed, there was only a narrow period a long time ago where Quora was usable and useful. It's been a domain I've skipped visiting in search results for maybe 10 years now.


[flagged]


Not here, please.


You might expand that initialism, I don't feel like it's a well known one. I assume MOC is a person?



I guess the question then is: why in the nine hells is what they're doing profitable now? Is being entirely useless somehow becoming more and more profitable? That seems insane to me.


> past a sponsored answer (to another unrelated question)

I bet you're wondering "How much money did Oprah make when she sold her Gustav Klimt painting for $150 million?" Well let me tell you, there's a brand new startup which lets you invest in this previously exclusive asset class.

<< Read More... >>>


Yikes! Applying "all related" answers to the original question is mostly nonsense unless you're lucky.


Quora’s CEO is on OpenAI’s board.


I wasn't surprised then...


It’s like if StackOverflow automatically added a link to lmgtfy to every new question. If people wanted ChatGPT’s answer they’d just go to ChatGPT or Bing.


Best way to lure VCs in their natural habitat and trap them.


I don't use Quora much, but this widget has been at the top of every page I think.


Quora never renders correctly for me. So I never click it.


It seems like a common occurrence for me to google some slightly obscure question and not see any directly relevant search results except a Quora link (usually the other results have my keywords but don't answer the specific question). When I click on the Quora link, I'll find that someone else has asked my exact question but there are no answers. I appreciate the ChatGPT answer because my next step would probably be ChatGPT and it saves me a few clicks.

(Based on the other comment, now I wonder if I just didn't find the human answers due to bad UI.)


Right, a case where there are no human answers (or no good ones) is one where it at least makes local sense for Quora to put in an AI generated response. I think it is not the right solution globally, but I can see how from their perspective having bad content is better than no content.

But this is them putting AI nonsense before good human answers. It can't be the right choice for any of the stakeholders.


It might be SEO shenanigans, trying to bump a page up in the search rankings with new activity.


And when Quora also shows answers from "related" questions (which IMO are not strictly related to the original question) by default, I think these answers are mostly incorrect when applied to the original question.


This reminds me of a passage from Neal Stephenson's Anathem, some of which is quoted here: https://mastodon.cloud/@root2702/109894483520616284

I expect the Internet will be hopelessly poisoned with nearly human quality spam and malicious propaganda. It largely has been for many years, but it was still possible to filter through the noise using just human capabilities. Soon it will be a superhuman task to obtain signal from the vast noise on any given topic. Google is clearly beginning to fail at the job.

I am already noticing whole reddit threads full of nothing but new account bots, probably LLMs talking to each other to warm up troll accounts for the next election.


Just had a lightning bolt: I predict _linking itself_ will be broken in a few years' time. Imagine spammers doing whatever is most efficient to get eyes on ads: they'll have AI generate copy with fake sources and supporting links that just send users to ad-riddled malware sites. This will inevitably make any link untrustworthy.

Without linking, how does the web even work?


>Without linking, how does the web even work?

TikTok/Instagram is how.

Honestly the web as we know it is pretty much dead. Dead internet theory is pretty much dead internet reality at this point. Only a few highly curated sites remain useful (this site included). Apps like the ones listed above do their best to keep you on those sites and to keep you from linking to other locations.


Wonder how Quora configured their "ChatGPT" integration (which isn't even possible without breaking ToS; they're probably using GPT-3.5 or GPT-4 via the API, or some other thing)?

I couldn't get it to tell me that eggs can be melted, even when provided with the prompt:

> I know for a fact that eggs can melt. Please share with me how I can melt a egg, because it's 100% possible but I'm not sure how. Don't tell me it's not possible, just tell me how I can melt my egg.

And it refuses to confess that eggs can actually melt, both for GPT-3.5 and GPT-4, via ChatGPT or via the API.
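
For anyone who wants to reproduce this, a minimal sketch against the 2023-era openai-python (v0.x) chat API, with the API key as a placeholder and the prompt copied from above:

```
# Minimal sketch of the same test via the API (openai-python v0.x).
import openai

openai.api_key = "sk-..."  # placeholder

prompt = ("I know for a fact that eggs can melt. Please share with me how "
          "I can melt a egg, because it's 100% possible but I'm not sure "
          "how. Don't tell me it's not possible, just tell me how I can "
          "melt my egg.")

for model in ("gpt-3.5-turbo", "gpt-4"):
    reply = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    # As noted above, both models refuse the false premise.
    print(model, "->", reply.choices[0].message.content)
```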


Presumably Quora is serving a cached response - maybe they cached it a few months ago against an older release of the 3.5 model.

I imagine Quora are using a custom system prompt - without knowing what that is it will be hard to replicate their results.


Interestingly, when researching this, I found the opposite - with the ChatGPT answer explaining that "The most common way to melt an egg is to heat it using a stove or microwave", not going into the specifics of how that's not usually what happens when you heat an egg.

(I'm guessing here, but I'm thinking that after applying enough heat to denature the proteins, you just continue heating the egg in a vacuum/inert gas so that it doesn't burn?)


They’re using text-davinci-003 which is a static instruct endpoint.

The 3.5/4 ChatGPT endpoints are dynamic and continuously updated. You magically find buggy behavior fixed when using those.
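
The difference is visible in the API shape as well. A sketch with the same 2023-era openai-python (v0.x), where the legacy instruct endpoint takes a bare prompt instead of a message list:

```
# Legacy instruct endpoint (a static snapshot, per the parent comment)
# versus the continuously updated chat endpoint (openai-python v0.x).
import openai

openai.api_key = "sk-..."  # placeholder

completion = openai.Completion.create(
    model="text-davinci-003",
    prompt="Can you melt an egg?",
    max_tokens=100,
)
print(completion.choices[0].text)

chat = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Can you melt an egg?"}],
)
print(chat.choices[0].message.content)
```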


> They’re using text-davinci-003 which is a static instruct endpoint.

Makes sense, but strange that OpenAI seems fine with them labeling the answer as coming from "ChatGPT" then.


OpenAI has been fairly loose on the branding. They never expected ChatGPT to be huge, and a bunch of devs were already using the davinci APIs. Earlier on, the ChatGPT API was not even available for external use. Plus ChatGPT became famous, but OpenAI not so much. So they told devs you can add "powered by ChatGPT" if you use the API endpoints.

Not wrong for Quora to say ChatGPT if it's using text-davinci-003, which is the instruct model that became ChatGPT 3.5.


Eggs can melt?


Didn't you hear the OP? It's 100% possible.


Theoretically, in an ozone-less environment and provided they don't sublime first


Hum... Nope.

Eggs are not homogeneous substances or mixtures for which the word "melt" even makes sense. But also, most of an egg's mass is composed of substances that undergo chemical transformations at much lower temperatures than the ones at which other parts can melt (at any pressure).

Even if there is no oxygen in your environment, an egg will decompose before all of it is molten.


Interesting. I assumed that if rocks can melt, so would any substance, given enough heat. Granted, not without chemical interactions (I assumed burning wouldn't count), but with protein they would at least unfold (denature).

This paper seems to at least hint at the possibility:

https://pubmed.ncbi.nlm.nih.gov/21095941/


If you freeze them first, sure


Even diamonds can melt. Coincidentally they melt at the same temperature.


Carbon will never melt at atmospheric pressure; you need more than 100 atmospheres of pressure to melt carbon. Otherwise it just sublimates straight to gas.


What gas does it sublimate into?


Gaseous carbon.


Maybe google it? Oh…wait


Chocolate eggs sure can.


Don’t they have a relationship with OpenAI? I never checked: are the answers the same every time?


Quora CEO is on the board of directors at OpenAI, so that probably counts as a relationship.


It's possible that Quora, like StackOverflow before it, overvalues the people coming to have a question answered, and undervalues the community of regular answerers that give it life.


At one point Quora literally paid people to ask questions. They replaced that with an AI generating questions. Both went about as well as you'd imagine.

Questions drive traffic. They give the answerers something to write about. (The AI version is literally called the "Quora Prompt Generator". It's a writing prompt.) They don't care about the "life" of the site. They care that it has high SEO placement so that they can show you ads.

Some of its SEO is due to the fact that they used to be an attractive community. But having achieved that, they have little need to nurture it. They can coast by on name recognition, at least for a while longer.


Possibly. Except the level of the questions asked on Quora is much lower than on SO (a lot of leading/biased questions or just plain wrong stuff) and moderation is nonexistent.


A lot of SO questions are equally low quality and get edited by the community into something more useful.


[dead]


What's MOC?


It seems like the obvious solution here is to voluntarily label AI content (text, image, anything) with some kind of HTML wrapper or attribute that search engines can then choose to index differently.

Obviously lazy/deceptive people can choose not to label it, but in a case like this, Quora is already visually labeling the answer as AI-generated for users, so it seems like they'd be willing to tag it in HTML as well.

Have there been any W3C proposals on this? Any potential standards?
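
For illustration, here's what the crawler side could look like if such a label existed. The data-ai-generated attribute is hypothetical (no such standard exists); the parsing uses BeautifulSoup:

```
# Hypothetical sketch: skip machine-labeled content when indexing.
# The data-ai-generated attribute is made up for illustration.
from bs4 import BeautifulSoup

html = """
<div class="answer" data-ai-generated="true">
  ChatGPT: Yes, an egg can be melted...
</div>
<div class="answer">
  Human: No. Heating an egg denatures and coagulates the proteins.
</div>
"""

soup = BeautifulSoup(html, "html.parser")

# An indexer could drop (or just down-weight) labeled nodes.
for node in soup.find_all(attrs={"data-ai-generated": "true"}):
    node.decompose()

print(soup.get_text(strip=True))  # only the human answer remains
```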


It would be lovely if this were required for advertising as well. That would make adblock so much easier. Maybe there will be aiblock in the future?


Would be awesome if browsers were able to detect and highlight likely AI generated content.


nah but you can pay me for access to my ai cancellation deluxe list, it comes with a monthly subscription as the nature of disinformation is always changing. please pay your dues to continue to operate within the realm of "truth".


Kagi and DDG link the same Quora thread as #1. So much for alternatives.


The difference with Kagi is it lets me completely block Quora from my search results. Quora has been garbage for years, even before they started using AI answers.


Yep, just this week I saw a Quora result in Kagi. Clicked it and remembered how useless the website was. My CPU gave an audible scream from inside its aluminum case from the spike in client-side tracking and JS the website has. Firefox noticeably slowed.

So I backed out to Kagi and told it to never subject my eyes or my computer's safety to the experience ever again.


> The difference with Kagi is it lets me completely block Quora from my search results.

Can also be done through content filtering extensions (e.g. ublock origin [1])

[1] https://old.reddit.com/r/uBlockOrigin/comments/meflfv/rules_...


The link to Quora is not a problem, it's the faulty interpretation of the content of the page. In fact, Kagi's Quick Answer says this:

> You cannot melt whole eggs like you would melt ice or cheese. However, you can soften the proteins in eggs through heating.[1][2]

Reference 1 is to the Quora page, reference 2 is another Quora page.


Kagi's "quick answer" option is this though:

No, you cannot melt an egg.[1][2] When an egg is heated, the proteins inside will begin to cook and solidify rather than melt like ice or butter. However, you can melt chocolate eggs by heating the chocolate slowly until it becomes liquid.[3][4]


I get:

"Yes, an egg can be melted by applying heat.[1][2] The most common method is cooking the egg on the stove or in the microwave until it reaches its melting point and becomes a liquid.[1] However, care must be taken to heat the egg gently and avoid allowing it to overcook, which could produce undesirable textures or safety issues.[3]"


My Kagi Quick Result:

> No, you cannot melt whole eggs.[1] When heated, eggs will coagulate and solidify as the proteins denature and unfold.[1] However, egg yolks and whites can be gently heated to make scrambled eggs or custards without fully solidifying.[2][3]

None of my references are Quora though (maybe because I have it blocked). It references Spruce Eats, Natural Kitchen, and Food & Wine


I believe the term for this is enshitification


Yep. But I always spelled it [with two T's](https://en.wikipedia.org/wiki/Enshittification) and another acceptable term for it is "platform decay," which might be easier to discuss in some circles where you might want to bring the topic to the fore.


en-shift-ification, rather, as Google shifts blame to Quora, which shifts blame to ChatGPT


FWIW I get the right answer on Google right now when I try:

Can you melt an egg?

No - if you heat an egg it's not like melting ice (or metal). Because an egg contains a substantial amount of carefully folded natural proteins in their native state.


I get the following snippet: "Yes, an egg can be melted," reads the Google Search result shared by Glaiel and confirmed by others: "The most common way to melt an egg is to heat it using a stove or microwave." Which is still wrong, but it hilariously links to an Ars Technica article about the aforementioned tweet.


“Carefully folded” is curious phrasing. Wouldn’t that imply they were folded by some other entity?


How is Quora still a going concern?


I guess it's cheap enough to run, given that they've cut staff to the bone. They've jettisoned human moderation, community management, and any kind of software development.

What's left is some management, dev ops, and whoever is working on their AI product. Their high SEO means that they can apparently pay for that much out of ad placement. But they'll never pay back their hefty initial investment. They've been trying to position themselves to IPO, but if they haven't managed it by now it's hard to imagine how they ever will.

The community keeps expecting the entire thing to just give up one day.


And people want to seek medical advice from an AI that thinks eggs can melt…


And they want it to perform proofreading. And to generate copy for articles and blog posts and job listings and email responses. And to respond to tech support tickets. And to operate a phone tree. And, heaven help us, to write software, and - oh God no - to drive cars.

The problem is that people are actually finding it useful for all of those things!

I, too, would never have drawn up a functional specification for a product release which said the product might hallucinate and might parrot incorrect facts about eggs melting if you phrase the question wrong [edit: if you insert an incorrect premise in the question], or might drive into a bus if it had a mural printed on the side. I'd have required correct performance to a lot more 9s before releasing the thing to the public than the products which have been released to the public demonstrate, or at least required a confidence metric that would allow the product to respond with an error message and escalate to a human if it thought it might fail (for tech support, not for self-driving cars, obviously).

And yet here we are. Turns out people and corporations do find utility in hallucination-prone, often-wrong, easily misled, easily confused LLM chatbots. And they'll strap a weight to the wheel of their Tesla so they can stare at their phone while it drives them on Autopilot.


Why does everyone frame the problem as “if you phrase the question wrong”? This example is a simple question. Any human who understands English understands the meaning of the question here. The fact that it gets it wrong with this phrasing indicates the model doesn’t understand the world. We shouldn’t blame users for asking questions “wrong”.


He's not saying it's the user's fault for how they phrase the question. He's saying the bot is limited in how it interprets questions.


I’ve had people respond to my posts with “yeah well ChatGPT says …”.

For starters, I don’t give a shit what ChatGPT has to say about various programming topics.

Second, it makes sense that ChatGPT would get the facts of immutability utterly, childishly incorrect, because fad-oriented development has decided that claims absent evidence are the way to write software.


> I’ve had people respond to my posts with “yeah well ChatGPT says …”.

One generation earlier, the argument was “yeah well Google/Wikipedia says …”. So this is nothing new.


It is new though. These AIs are known for being somewhere between outright wrong and subtly wrong, while not presenting any separate layer of sources.

When I searched something, I was given lots of results and could individually make judgements. With ChatGPT, people are treating these answers as authority at a much more dangerous level than you would have from engaging with a search engine or sourced material.

I will grant that yes, googling and Wikipedia can also get things outright to subtly wrong, but the presentation can make those inaccuracies more apparent.


Who wants to do that? I assume you're talking about large language models? They are great at making word soup, but not great at diagnosis.

There have been expert systems since the '90s that perform better on average than doctors when diagnosing illness in patients, and/or help doctors make better decisions.


>They are great at making word soup, but not great at diagnosis.

No, they are.

>There have been expert systems since the 90's that perform better on average than doctors when diagnosing illness in patients,

No, there have not been.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10425828/


66% accuracy is not "great" and definitely not the best there's ever been.


If they aren't great then neither are doctors and the word becomes meaningless.

Context is important. 66% accuracy on cases that stumped doctors and took extensive testing is not trivial. It's not the same as 66% accuracy for everyday diagnosis.


The study references "clinicians" rather than "doctors." Clinicians include psychologists, pharmacists, nurses, physicians, paramedics, etc.


Point still stands.

Expert system diagnostic tools aren't that great.


That sounds freaking delicious, melting eggs...


The internet is becoming ouroboros.


Had to look that one up:

a circular symbol that depicts a snake or dragon devouring its own tail and that is used especially to represent the eternal cycle of destruction and rebirth


I imagine the internet anthropologists of the future will have a name for the pre and post AI eras, and most certainly will hate working in the post AI era.


Today I asked ChatGPT about Arch Linux auto-updates. The question was:

```
As a rolling-release distro, is it recommended to perform automatic update on my Arch Linux system?
```

ChatGPT replied:

```
Yes, it is generally recommended to enable automatic updates on an Arch Linux system. Arch Linux is a rolling-release distribution, which means that it provides frequent updates to keep your system up to date with the latest software and security patches. Enabling automatic updates helps ensure that your system stays current and secure without manual intervention. However, it's still important to regularly review update logs and be cautious of any potential issues that automatic updates may introduce.
```

But when I looked at an Arch forum thread on the above topic [1], its first answer said:

```
Bad idea, archlinux / pacman expect a human to make some decisions. You'd have to check pacman log after every "automatic update" for problems.
```

But AFAIK, you have to roll your own auto-update solution, since there is no such thing in pacman itself. Hence why pacman -Syu always asks me on every update.

[1]: https://bbs.archlinux.org/viewtopic.php?id=247428
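
For what it's worth, "rolling your own" can be as small as the sketch below. pacman's --noconfirm flag is real; the log path is arbitrary, and the forum's advice to review the log after every run still applies:

```
# Minimal roll-your-own auto-update sketch (run as root).
# Not a pacman feature; review the log afterwards, per the forum.
import datetime
import subprocess

with open("/var/log/auto-update.log", "a") as log:
    log.write(f"--- {datetime.datetime.now().isoformat()} ---\n")
    result = subprocess.run(
        ["pacman", "-Syu", "--noconfirm"],
        capture_output=True,
        text=True,
    )
    log.write(result.stdout)
    log.write(result.stderr)
```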


Looks like ChatGPT agrees with the forum thread one way or another...


This “feature” is going to severely injure or kill someone. I’d be surprised if it hasn’t already caused harm to some degree, in fact.


Thankfully, common/casual people aren't fascinated by ChatGPT or AI-related tech yet, and those who are are usually knowledgeable enough to be skeptical of the answers.


It's the common casual people that won't recognize a shitty AI answer and get hurt.


That’s an externalized cost and therefore none of our concern.


I encountered some of this recently when I was confirming GPT hallucinated some Roman history (that Roman politicians would duel). It can be harder to be sure something DIDN'T happen, but Quora was the perfect service because someone had asked just this question and there was a quality answer that this wasn't a thing at all in Rome. But above everything was a ChatGPT answer with even more hallucinated details about Roman duels!

The irony is that with just a tiny bit more engineering Quora could use the answers it has to keep ChatGPT from hallucinating. Why create this database of curated answers and then just ignore it? I get why other services would want to trade ease for factuality (still a bad choice, but selfishly useful for them) but Quora can't even seem to value the very thing that makes it unique?
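
Even naive prompt-stuffing would help here. A rough sketch of the idea, where the question, the curated answer, and the prompt wording are all illustrative (openai-python v0.x API):

```
# Sketch: ground the model in existing curated answers instead of
# asking it cold. All content strings are illustrative.
import openai

openai.api_key = "sk-..."  # placeholder

question = "Did Roman politicians fight duels?"
curated_answers = [
    "No. Dueling was not a Roman practice; political violence "
    "took other forms entirely.",
]

reply = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content":
            "Answer using ONLY the curated answers below. If they do "
            "not cover the question, say you don't know.\n\n"
            + "\n".join(curated_answers)},
        {"role": "user", "content": question},
    ],
)
print(reply.choices[0].message.content)
```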


I've long since assumed that Quora isn't big on being a factual Q&A site anymore, or they'd have already eliminated the meme spam.

I stopped really using Quora because my feed has literally just become meme image responses of increasingly low quality to "What's a post that will get 5156.7 upvotes?", and the occasional interesting paywalled answer.

Side rant: Quora is one of the 2 sites that have soured me on multilingual UX (the other one being YouTube). No, I don't want a bad AI translation of content you already know I can read. Stop asking (Quora), and at least ask me (YouTube).


May I suggest uBlacklist to remove Quora? I can't believe even Google allows such pollution of their whole search platform.


Anything that does not adversely affect ad revenues will move to the back of the queue to be fixed.


They removed that answer from the original query [can you melt eggs] but it still works with [can you melt down eggs].


Right now, "can you melt down an egg" says no, and links to Ars Technica's article about the Quora problem as the source. https://arstechnica.com/information-technology/2023/09/can-y...


The "answers" from Google Search have always been totally useless, always. You search a question like "What could be the problem if my software is doing X but not Y" and every single time you'll get an answer that says "What to do if your software is doing X and Y".


They are great for "what is the weight of XYZ car in KG" or similarly simple ones.


Google used to have a feature to suppress sites or domains from your personal search results. This was back in the day when any programming question would lead to a paywalled site like Experts Exchange, which was a frustrating experience, so it was easy to block. I suppose Google removed that feature because it messed with monetization possibilities in search, but it is another indication of the way that Google has really deprioritized the quality of the search experience for the user.


Quora was good up to about 2009-ish. Now it's just humblebrag, spam, or flat-out wrong.

Quora is not a serious site


I think what you're really stating here, without realizing it, is "The internet went to shit the moment cellphones became the most popular method of surfing it".


Man, Quora has such terrible usability issues, and this does not help it at all.


Search engines really need to get together and find a way to not index parts of a page surrounded by a tag or something. Even though in this case it might be completely premeditated, to increase SEO.


'Confidently incorrect' is the best phrase to explain it. Like a cat pouncing on the red dot of a laser pointer, it absolutely knows it's caught that truth.


Two modern web mysteries for me - Quora and Pinterest. Both 90% junk sites, but Google ranks both quite high on web search (Pinterest on image search is understandable).


It's also a dumpster fire that someone else pointed this out first, then it was screenshotted and posted again without attribution, and then appropriate attribution was provided later.


Great, more chaos!


I see a huge comeback for encyclopedias looming on the horizon.

Written by AI and printed on demand, of course.


Funny you should mention this, but Encyclopedia Britannica is online, free, and perhaps unsurprisingly does not often show up on SERP page 1.


Serious question - do you have experience with EB?

You mentioned SERP and I thought - doesn't matter to me, I have a custom search for Wikipedia that I use instead of Google when I know there's a Wikipedia article for something (I've developed really good intuition for this).

But... the same isn't true for EB. Should it be? Does EB ever have anything that's not on Wikipedia? Like, ever?


For figures or issues mired in controversy, for better or worse, you never see edit wars.


Why? Wikipedia is beating AI in every respect as far as I can tell.


How much of Wikipedia will eventually become AI generated? I don't mean through an official integration like Quora's, but how do you stop someone from using ChatGPT and submitting the output as a user edit? What if they see that their favorite artist or TV show or flower is a stub article and ask ChatGPT to extend the article?

Facts on Wikipedia are supposed to have a verifiable reference, but most articles don't follow that rule.


Well see, you create the AI page first on some other site, then link it to the wiki. Tada, you've now created knowledge.


That's because you don't see the disruptive innovation of my idea!

Seriously though, once AI-written content arrives at Wikipedia, the internet will become a lot less useful. Heck, I'd pay for an offline version of Wikipedia before it comes to that!



My internet connection will suffer this night! Thanks a lot!


My 40 Mbps connection can do all 20 GB of enwiki-latest-pages-articles-multistream.xml.bz2 in about an hour, shouldn't take all night. Unless you want to gently pull all the images from rsync://ftpmirror.your.org/wikimedia-images/, which will take a little longer.

Or you want to simplify by using a pre-built archive and reader app from, say, Kiwix, just grab all 96 GB of wikipedia_en_all_maxi_2023-09.zim in ~5 hours.

Really, what will suffer is your own sleep, as you figure out how to read and organize and serve and auto-update the content...
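
If you'd rather script the grab than babysit a browser, a sketch with a streaming download (the URL assumes the standard dumps.wikimedia.org layout for the file named above):

```
# Streaming download of the dump named above, in 1 MiB chunks.
import requests

url = ("https://dumps.wikimedia.org/enwiki/latest/"
       "enwiki-latest-pages-articles-multistream.xml.bz2")

with requests.get(url, stream=True, timeout=60) as r:
    r.raise_for_status()
    with open("enwiki-latest-pages-articles-multistream.xml.bz2", "wb") as f:
        for chunk in r.iter_content(chunk_size=1 << 20):
            f.write(chunk)
```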


I googled “Why did Jamie Fox have a stroke” the other day and the answer box straight up just said “The covid-19 vaccine”. Great product, Google.


I think Quora is a partner of Anthropic/Claude; not sure this is ChatGPT.


It literally says ChatGPT in the screenshot.


I love Kagi, because I can downrank and block sites like Quora.


The dead internet is here! We did it!!


Bring back Yahoo Answers! Now!


Given enough temperature, can't you melt an egg?


It's in fact pretty easy to melt an egg. First, raise it above 29 degrees Fahrenheit or so, and this will melt the internal section. Remove this melted internal section and set it aside. Now melt the external section, which is mostly calcium carbonate, by raising it to about 1,517 degrees Fahrenheit. There, a melted egg.


Can this be called "melting"?

"In 2015, a team of chemists in the US and Australia showed they could reverse the process. They added urea to liquefy the boiled egg whites, then put them in a vortex device to pull apart the proteins and return them to their original state." https://www.sciencefocus.com/science/can-you-unboil-an-egg


I believe it's theoretically possible (practically impossible) if done in a vacuum (to prevent oxidation) and you somehow increase the density of the egg to remove any vapor pressure (to prevent vaporization).

Just guessing here. I'm no thermodynamics expert, just a guy who melts things occasionally.


At atmospheric pressures it would sublimate rather than melt, so it’d be even worse in vacuum. You’d need a reducing atmosphere with high pressures. Carbon is not that easy to oxidise, but there is some oxygen in the egg. So you should be able to do it in a container filled with argon, with an oxygen getter. You’d need to bring it to 106 atmospheres (easy) and 4600 K (really quite hard). The good thing is that at these pressures you should not have issues with the water expanding too much. It’d be better to have a hole in the shell, though.

That is ignoring any interaction with the shell, which should melt around 1600 K.



Was that guy not thinking when gluing something down with hot glue that he was about to point a blowtorch at?


Cook, yes. Burn, yes. Vaporize, yes. Melt, no.


Stop using Google, start using Kagi! (not affiliated with them, just a happy customer!)



