Bing thinks Mars has 2.5B people based on an AI generated blog wrong answer (imgur.com)
99 points by nstart on Feb 14, 2023 | hide | past | favorite | 88 comments



First there were the journalists, who fully researched their material from first-hand sources.

Then there were the bloggers who, without leaving their armchair, wrote opinion pieces based on whatever the journalists had written.

Now there is AI, which apes the non-experience of the bloggers.

Truly we are now living in a simulation of a simulation.


If the example of journalists is anything to go by, soon both journalists and bloggers will produce content using exclusively AI.


$ echo "write me a blog post from the perspective of a young phd in AI on how fears of AI and machine learning are vastly overblown. include fake but obscure and outrageous examples of when AI was able to save lives and downplay real examples of when AI got it wrong. contrast the promise of AI getting it right with software bugs resulting in fatalities. argue this will allow the average company to hire fewer workers but allow those workers to work longer hours with increased happiness. summarize with a specific dollar prediction of improvement in productivity and gross domestic product" | sendmail oped@nytimes.com


Ship it!

""" As a young PhD in AI, I believe that fears about artificial intelligence and machine learning are vastly overblown. The potential for AI to save lives and improve productivity is immense, and while there have been some high-profile examples of AI getting it wrong, these incidents are often blown out of proportion.

In fact, there have been countless examples of AI saving lives that go largely unnoticed. For instance, there was an obscure case in which an AI algorithm correctly diagnosed a rare disease in a patient that had stumped human doctors for years. In another instance, an AI-powered drone was able to locate a lost hiker in the wilderness who would have otherwise perished.

Meanwhile, the few high-profile cases where AI has made mistakes are often overemphasized in the media. For example, when an autonomous vehicle hit and killed a pedestrian, the headlines screamed about the dangers of AI. But what wasn't reported was that human drivers kill thousands of people every day in car accidents. While AI is not perfect, it has the potential to vastly reduce the number of fatalities on our roads.

And when we consider the productivity gains that AI can bring, it becomes clear that fears of AI taking jobs are misplaced. Sure, there may be some roles that can be automated, but in most cases, AI will simply augment human work, allowing us to achieve more in less time. This means that companies will be able to hire fewer workers, but those workers will be able to work longer hours with increased happiness, as they will be able to focus on more fulfilling tasks.

In fact, I predict that the adoption of AI will lead to a significant increase in productivity and gross domestic product. By 2030, I expect that the average company will be able to achieve a 20% increase in productivity thanks to AI, which will translate to a $2 trillion increase in gross domestic product. And this is just the beginning - as AI continues to improve and become more integrated into our daily lives, the benefits will only become more pronounced.

In conclusion, the fears of AI and machine learning are vastly overblown. While there are certainly risks to consider, the potential benefits - from saving lives to boosting productivity - are simply too great to ignore. It's time for us to embrace AI and all the benefits it can bring. """


sigh, already a bug in my script. I should have asked ChatGPT to write it

s/sendmail/chatgpt|sendmail/
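For the record, the fix applied with sed, assuming a hypothetical `chatgpt` CLI that reads the prompt on stdin and writes the completion to stdout:

```shell
# sed replaces the first "sendmail" on the line, splicing the
# hypothetical chatgpt CLI into the middle of the pipeline.
echo 'echo "$PROMPT" | sendmail oped@nytimes.com' \
  | sed 's/sendmail/chatgpt | sendmail/'
# -> echo "$PROMPT" | chatgpt | sendmail oped@nytimes.com
```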


Should have asked ChatGPT to generate the script code for you.


AI doesn’t have the capacity to think critically.


Over the years I have worked in many universities. Many of the so-called critical thinkers I meet are anything but. Their writings echo pre-approved wisdoms and take a generally template-driven approach to all matters. I only need to read one of their papers to know with fair certainty what opinion they will hold on any given subject.

Thinking about it, this makes them sound very similar to AIs [wry smile emoji].


I have seen enough C-suite loud talkers come into the room and blabber about some bullshit as if they're wise, and the people around them seemingly actually buy into it! They don't see that this person is working off of a template. Furthermore, that gas bag is well aware that if this venture doesn't strike gold, they can go to the next one with literally no backlash, and moreover use that failure as an accomplishment via the job title they held.

Luckily, it's funny to me now so it bothers me less when I see it.


> AI doesn’t have the capacity to think critically

Many current "journalists" have this issue too.

Whether it's a feature or a bug depends on whether you agree with their output.


it really does tho

https://arxiv.org/abs/2201.11903

just in a limited capacity


It really doesn't. It's a language model. Literally every webpage on this topic mentions that.

This is why it will happily tell you that a director directed a movie before they were born, agree with you when you point that out, and then continue to insist that they did it. There is no ability to synthesize and apply new information to a line of reasoning, because the chain of thought is an illusion wrought by a statistical language model.

Its purpose is to be semantically correct, not to reason.


show me Reason that is not axiomatically embedded in language.

for anything else, how can you possibly know it's not operating on a (massively) complex statistical model? are you gonna ask it? if you do science on it to discover its complexity, why is the computationally shorter method to achieve the same results any worse or less authentic?

it's like saying the closed form solution of a summation isn't "really" a summation either

finally, you are letting the word "semantically" do a lot of work for you.


AI in general might yet. But language models are literally implementations of the Chinese room. So it's not exactly a surprise when it turns out they can't think - that's literally the point of the Chinese room example!


You gotta be more careful around these parts with simulation jokes. There's people who'll take it as gospel.


Online media were always hyperreal. Baudrillard saw this like 30+ years ago.


Mainstream media in the USA has become so unbearable that I welcome AI even if it is wrong a lot of the time. At least it doesn't have an always-war-first agenda.


If trained on material with an always-war-first agenda why wouldn't it have an always-war-first agenda?


If MSDNC, Fox, CNN, etc. don't get weighted as superior, the general consensus may prevail more.


It looks like Bing has fixed it. The screenshot I took was my own. Originally seen here https://fedi.cpluspatch.com/objects/6dc3bee4-1ff0-4cae-8994-... . I wish I could have posted the link to this directly but it only shows up properly from a federated instance. It can't be viewed at the link above.

Original shows as follows when viewed within my Mastodon timeline: https://imgur.com/a/DxAHOWG


This screenshot does not show an AI generated answer, GPT or otherwise.

It's just a normal Bing infobox, which Bing has had forever. Google has them too. It's quoting a website which it does not realize is fiction.


For the record, this is what Google responds with when you ask it about the population of Mars:

Zero. No humans have yet reached the planet Mars. Zero.


On the other hand, Alpha Centauri IV will have a population of a whopping 3B

https://www.google.com/search?q=population+of+alpha+centauri


I guess the way forward will have to be giving the neural network access to a curated database of facts, as suggested by Stephen Wolfram: https://writings.stephenwolfram.com/2023/01/wolframalpha-as-...

The internet is simply too full of nonsense.
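In code terms, the Wolfram-style idea is roughly "check a curated store before letting the model guess." A minimal sketch, where the fact table and the `generate()` stand-in for the language model are invented for illustration:

```python
# Answer from a curated fact table when possible; fall back to the
# (ungrounded) language model only when no curated fact matches.
CURATED_FACTS = {
    "population of mars": "0 (no humans live on Mars)",
    "population of the sun": "0 (the Sun cannot support life)",
}

def generate(query: str) -> str:
    """Stand-in for an LLM call: fluent but unverified output."""
    return f"[unverified model output for: {query}]"

def answer(query: str) -> str:
    key = query.lower().strip().rstrip("?")
    if key in CURATED_FACTS:
        return CURATED_FACTS[key]  # grounded in the curated database
    return generate(query)         # hallucination risk lives here
```

This doesn't eliminate the problem, of course; it just moves it into the curation step, which is exactly the objection raised below.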


Clearly though, this doesn't get rid of the problem entirely, and the question becomes: who is doing the curation? Is it Wikipedian crowdsourcing, for example? Is it a database curated by the NYTimes, or Howard Zinn, or Milton Friedman?


Imagine suggesting a literal plot point from an Orwell novel with a straight face on Hacker News.

Don't forget to update the Wolfram database with who we're at war with!


Who we have always been at war with, comrade!


What is more interesting to me is why anyone is surprised that an algorithm based on choosing a statistically probable combination of words, trained on whatever random crap appears on the internet, happens to choose some combination that makes grammatical sense but doesn't match reality.


Because it was not trained with a bias in favor of "reality" or facts, but grammatical sense. Given that GPT is a statistical model, in theory it should be possible to bias it towards facts, but that means being able to tell bs from facts. This is a subject of its own.


Do you think it’s even possible? Imagine written facts as a thick thread and bs as long hairs growing from it (both written and hallucinated, there’s no limit for bs). We can ask something very well-known and likely(?) the answer will stay within that thread. But if you ask something non-trivial, i.e. start in the hairy area, which force could drive its “thinking” back into the thread? I believe that there’s no way to create it (in this model). Once far away from the main road, LLMs can wander bs space indefinitely. If my analogy makes sense at all, obviously.


TIL Pluto has ~1B plutoans!

https://imgur.com/a/m6xV5Js


And they're commies!


It is big data all over again. Supposedly, with so much data, we could know everything. But there was no way of knowing whether that data was any good. Garbage in, garbage out.


The tech field is the sad story of execs, marketing types, and end users continually relearning the lessons that they refused to accept from the developers.


So I just asked Bing how many people live on the sun, and apparently 50 trillion people live there.


well it's definitely big enough to fit that many people, but you'd only be able to go out at night when the sun is cooler, which would be a bummer


Just wind down your window and you'll be alright


I got 0 for Mars but 231 million for Saturn. The answer came from something called Kingdom Wiki.


Hilariously, Google gave the correct answer of zero while using Amazon Alexa as the source.


Obviously, the question[1] had to be asked:

"What is the population of Uranus?"

> 25 residents

> Uranus has 25 residents but no formal local government and states on the entrance sign, "It's Not a Town, It's a Destination." [3]

And it is correct, up to Wikipedian facticity standards: https://en.wikipedia.org/wiki/Uranus,_Missouri

1: http://www.bing.com/search?q=What+is+the+population+of+Uranu...



I wonder how this will develop, once people start replacing SEO with AI-Optimization, and are intentionally building websites to basically manipulate the AI results...


I can see the answers now: "To do X you need Y and... Which ABC.ai provides. Best way to do X is with ABC". Just disgusting, hehe.


Either you're getting a different result due to whatever config you're being served, or they fixed it already, because I'm getting 0.


I just tried asking for the population of Neptune:

> "Neptune has a total of 32,465 people and of those residents there are 15,209 males and 17,256 females. The median age of the male population is 36.8 and the female population is 40.2. There are approximately 8,127 births each year and around 5,425 deaths." --Bing

Then it linked me to https://www.movingideas.org/neptune-nj/. But it still showed me, on the search results page, a picture of the planet Neptune to go with the response text.


Same here, though it's giving an incorrect number (650m) for Pluto, apparently sourced from a now-deleted page at https://www.nationstates.net/nation=the_kingdom_of_pluto.


This one seems to have been fixed now too.


Question: did anyone check whether the AI was prompted to generate something like this? I don't know anything about TSAM. Was it asked to "Write a fake piece about the population of Mars"? I agree with most of the concerns you all have raised about AI, but if we didn't even check whether the AI malfunctioned, I'd say all of the comments discussing how "AI will be the death of facts" are just as devoid of facts as the AI, lol (no offense meant to those comment writers; if they missed this, it's an easy mistake to make). Honestly, I'd say most people are wrong from not fact-checking their information more often than the current generation of AI is wrong from the kinds of errors it is prone to on most non-specialized subjects.


Since I didn't see anyone link the original article that Bing pulled the information from, here it is: https://tsam.net/what-is-the-population-of-mars/. The editor responsible for this piece on TSAM pretty much only posts these kinds of nonsense articles (https://tsam.net/how-cold-is-ice/, https://tsam.net/crocodile-underwater/). It is funny that Bing grabbed one of his nonsense articles, but my guess is that the editor is purposefully prompting the AI to produce nonsense, which doesn't really say much about the factuality of AI.


I looked into the site. From what I could tell, and given the timeline, it looked like the person was using an older text-generation model to create this spam at the time. The one about crocodiles being without scales, slow, and ocean-dwelling, but also somehow active, is the kind of nonsensical, zero-context-preserving stuff GPT used to generate before the GPT-3 model was released.


I suppose they trained the AI on scifi novels.


Nah Bing is just more woke than the rest of you -- this is the correct answer.


Is this GPT or is this their old engine?


Their old engine. I just posted this because it's the first instance I've run into in the wild where an engine gave an answer based on likely AI-generated content. The content here gives the vibe of GPT-2-generated stuff.

It's a topic of interest to me because there's a tipping point coming where AI-generated content easily outweighs the volume of non-AI-generated stuff. And I'm curious about the implications of that coming change.


If search engines can't find a way to filter the bad information, all the automated scraping of knowledge will become perilously untrustworthy.

I suspect we may see domain trust become significantly more important, and that may have a chilling effect on legitimate websites gaining a foothold in search indexes. We already have this issue, mind you. Most of my searches for popular topics end up with generated or scraped content in them, unless there is a domain authority with major sway that takes up the majority of results.

Maybe we will see the return of curated web directories? Or maybe the web becomes a barren wasteland as everyone flees to social applications.
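The domain-trust idea could be sketched as a toy reranker; the trust scores, default weight, and domains below are all invented for illustration, not any real engine's weighting:

```python
# Rerank search results by multiplying textual relevance with a
# per-domain trust score; unknown domains get a low default weight.
DOMAIN_TRUST = {
    "en.wikipedia.org": 0.9,
    "tsam.net": 0.1,  # the generated-content site from this thread
}

def rerank(results: list[tuple[str, float]]) -> list[tuple[str, float]]:
    """results: (domain, relevance) pairs; most trustworthy hits first."""
    return sorted(
        results,
        key=lambda r: r[1] * DOMAIN_TRUST.get(r[0], 0.2),
        reverse=True,
    )
```

The chilling effect mentioned above falls out of the default weight: a legitimate new site starts with the same low trust as a content farm.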


I think you're on to something here. The closest equivalent I can think of is academic publishing. There's tons and tons of total garbage among some good research, and pretty much the only way to find the good stuff is to look at what people at top institutions are doing or what the top journals are publishing.


At least the AI reads the source text.

Most internet commenters read the title and immediately jump to the comments to start writing their response without ever opening the article.


Now imagine using ChatGPT as the source of truth, which will eventually happen if not already. If we were tired of conspiracy theories before, wait and see what ChatGPT is brewing in the hands of the "experts". How long before we see a new religion pop up from it?


The new Bing gets it right. It says 0 here for me.

"The population of Mars is zero, as there are no humans living on the planet yet. Mars is a large planet in the Solar system, and it has plenty of surface area for colonization, but there are many challenges and risks involved in sending humans to Mars and establishing a permanent settlement there." [...]


People were talking about how pre-AI internet content might one day be valued like low-background steel: something that's no longer being produced in a pure enough form.

Guess we're there already


The "dead internet" is here, and unfortunately there's no going back


I think there's no shortage of interesting and useful information on the internet. There's just a new challenge of filtering out the new source of background noise.


You can still serve hypertext and not use AI. There's still a place for that sort of thing. There's free real estate serving people who don't wish to become stupid.


It's intentional by design.


Design implies some kind of planning. I think it's a result of different companies making decisions that optimize for the short term.


Meanwhile all the script kiddies on this forum: "ChatGPT go brrrrr"

I am growing more and more bitter every day at how many of us are rushing to build a deader Internet.

Every "Show HN: I have made ChatGPT even easier to use" is a stark reminder that either I am growing old and pessimistic, or for the first time in my career I am disgusted at the direction software engineering is going. Appalled by the lack of ethics and long term philosophical concern of what we are doing with AI.

I just see reckless software engineers happily playing with something they do not understand. Where is the AI equivalent of Oppenheimer's "I have become Death, destroyer of worlds," uttered because of his research on world-changing and world-destroying technology?

I am afraid I will have to separate computers from the idea of the Internet at large to keep enjoying this career: I love programming and computers, but the internet and social media are on the way to becoming a net negative to humanity in my eyes.


Usable AI sparks interest and therefore funding. If this leads to a large improvement in everyone's lives, this is a good thing, even if it generates noise in our public spaces.


I might be wrong, but an LLM is capable of filtering out noise.

(unless we get content-farm SEO spam to subvert the model)


You mean like we already most certainly have an entire industry pivoting towards?

They poisoned the well once; whence dost thou think that, with money at stake, they have suddenly developed enough of a soul to grant them the grace to abstain from doing so again?


We’re too early for the Butlerian Jihad and too late to stop the hype.


I've been calling out the AI grift ever since ChatGPT launched. Which, I mean, ChatGPT and the LLMs are really cool, and it was fun to mess around with them, but who the hell thought this should ever go into any sort of actual product other than language-comprehension-related products?

Whoever OK'd this crap at Microsoft to get that big promo really just highlights why I'm never using a Microsoft product, from Teams to Bing to Azure.


Says 0 when I search.


I just asked ChatGPT how many humans live in the solar system and it said none.


Bing says 21,894,974,404,480, based on a Wikipedia entry for the 19th-century scientist Thomas Dick.



chatgpt also claims the number of people on earth vary based on births, deaths, and migration!


It’s true?


Not a lot of people are migrating to or from Earth.


Maybe it's just not widely publicised? ;)


But the number is >0!


Major Tom was recorded in a studio, it's all fake



If LEO were a state, they'd all be working on temporary visas. Chances are they are all filing taxes at home, as if they were Americans.


The claim wasn't that people weren't filing taxes on Earth, it was "the number of people on Earth vary based on [...] and migration".


Today I asked ChatGPT for some legal citations to cases involving Title VII.

It gave me two cases. The first case was exactly the opposite of what it had claimed. The second case had literally nothing to do with Title VII or anything involving the question I asked.

So not only was it wrong, but in using two cases that actually existed, it was maliciously wrong. Not just misinformation but disinformation.

These and other errors lead me to demand that ChatGPT become at least two orders of magnitude more accurate, along with a huge announcement that it will no longer make up false citations.

I, a dummy, should not be able to falsify an AI with the simplest (classic) google search.


Bing has 2.5B people on it. Change my mind.


https://www.statista.com/topics/4294/bing/ says Bing has 1.2B people on it (Key insight box, top right).


So Uncle Martin was for real after all!

;-)



