
Oh, this gives me flashbacks to Call of Duty: Advanced Warfare (the one with Kevin Spacey as the antagonist).

There was this one level that begins with a cutscene and then a car chase on a highway, and eventually (after many game crashes, restarts, and rewatches of the unskippable cutscene) lands you on... I'd like to say an aircraft carrier?

Perhaps it was my hardware that was shit, but the game worked well enough on every other level. That one. damn. level... it just kept crashing. Which led to a restart and that unskippable. fricking. cutscene...

ugh. just ugh.


On whose authority? ON MY AUTHORITAH.


While I believe much of what you argue is correct, I unfortunately also believe that you are ultimately incorrect:

For the non-techy part of the population, and certainly even a percentage of the techy part, there will be those who don't have an hour in their daily lives to play around and discover the limits of ChatGPT etc.

Worse still, an even larger portion of that crowd, even if they have an hour to spare, just... won't. Playing around with an LLM just isn't something most people would find entertaining or enlightening enough to do of their own volition.

What I find more likely to happen is that the members of the above crowd will instead slowly, over time, get occasional doses of exposure to interactions with an LLM, and as long as the responses yielded aren't batshit crazy, these people will likely develop a growing sense of confidence in these LLMs.

At that point, it will likely be very difficult to prove to them that the LLMs are far from perfect.


Couldn't "proving to them that the LLMs are far from perfect" be accomplished by showing a few examples of LLM hallucination? This does not seem very difficult.


A significant portion of the American population doesn't accept the results of the 2020 election. No amount of proof will change their mind.

Everyone alive has been exposed for decades to both sci-fi tropes and media hype around AI, and the ideas that stick in people's heads tend to be those repeated and reinforced by media, not by direct experience. When LLMs get rolled out to everyday users who aren't experts, they will come in the form of chatbots or plugins or text summarizers or code writers, not in the form of carefully fact-checked conversations with ChatGPT. Lots of people already use ChatGPT through Bing, and they aren't likely to check what it tells them.

We see people misled and scammed every day because of their own ignorance and misinformation promulgated by both mainstream and social media. I can't begin to understand how anyone would think -- contrary to all evidence -- that the huge population of non-experts will figure out the limitations of an opaque technology on their own after a few interactions with it.


>A significant portion of the American population doesn't accept the results of the 2020 election. No amount of proof will change their mind.

This is different. There's obvious bias here. Many people don't want to believe the election is legit because of team and group mentality. People tend to believe what they want. Additionally, people can't "see" the results. It can't fully be proven, because there is always a layer of indirection where you need to trust a potentially compromised source. From the perspective of the public, what happened during the election can only be ascertained through a network of indirect sources, so it's convenient for people to assume any one of those sources is compromised in a way that brings the conclusion closer to the one they desire.

For ChatGPT, seeing is believing. You can see the thing hallucinate right in front of your freaking eyes. There is no layer of indirection. There is no room for someone to lie to themselves. Additionally, the bias for ChatGPT is actually in the other direction. Nobody wants to believe that an AI can trivialize their skill set. People would rather believe ChatGPT is garbage, because that is what they prefer to believe.

In fact, I would argue this exact bias is the thing affecting many people right now. The same type of biases that make people believe the Trump votes were rigged are the same type of biases that prevent people from even considering that an LLM is more than just a stochastic parrot. They don't want to believe it... so they don't.


> There is no layer of indirection. There is no room for someone to lie to themselves. Additionally, the bias for chatGPT is actually in the other direction.

Bing search and customer service chatbots, for example, give a layer of indirection. Spam emails, LLM-generated legal briefs, and term papers have indirection when the recipients (judge, professor) don't interact directly with the LLM. Since interacting directly with ChatGPT takes some skill and doesn't seem immediately useful, most people will interact with it through things like search engines and friendly chat widgets and word processor plugins, just as programmers already interact with an LLM indirectly through GitHub Copilot.

> Nobody wants to believe that an AI can trivialize their skill set. People would rather believe chatGPT is garbage because that is what they prefer to believe.

They may not want to believe it, but you must have seen the numerous articles -- many of them posted on HN -- about exactly that happening. Not a day goes by that HN doesn't get multiple posts expressing fear and worry about "AI" taking someone's job soon, or making that job redundant. And people may simultaneously believe "ChatGPT is garbage" and worry that they will lose their job, or get killed by a robot drone.

I argue that too many people already have a bias towards believing ChatGPT/LLMs equal AGI, because the media has primed them to believe that. The term "artificial intelligence" itself gives it away. If no one used "AI" to refer to ChatGPT et al. and instead called them large language models, that might help people realistically evaluate LLMs as tools rather than as a true artificial intelligence. The term AI has been applied to so many ideas, fantasies, experiments, and now products that it means everything and nothing, and every individual can and will interpret it according to their own biases and knowledge. Of course "AI" sells a lot better than "LLMs", and we're seeing the self-serving hype in full swing already, as numerous companies and VCs try to capitalize and recoup their losses from the last hype cycles that people got wise to (crypto) or never got interested in to begin with (Web3 and the metaverse).

I'm old enough to remember when scientists successfully cloned a sheep, and immediately the media, popular and specialized, cranked out story after story about how cloning would reshape humanity in just a few years. We were told that human clones were just around the corner, with all the attendant hand-wringing. Of course that never happened, but I wouldn't find it all surprising to poll random people and find that they believe human cloning happens all the time, because the hype didn't get followed by a correction or apology.


>Bing search and customer service chatbots, for example, give a layer of indirection

There is no layer of indirection; you are chatting directly with the AI. You are not having a third party describe their experience with the AI to you.

>I argue that too many people already have a bias towards believing ChatGPT/LLMs equals AGI, because the media has primed them to believe that.

No point in arguing if you don't have some form of evidence. My evidence is that there isn't a single person in this thread who is fooled by AI or isn't aware of the limitations of current-gen AIs.

You just need to find one person in this entire thread who fits your description and link them here, and you'll be right, since you'll have falsified my statement. That is the data-driven conclusion.

Let's use data to get to the bottom of this. Seriously.


I gave my anecdotal evidence, and the evidence of numerous posts on HN and elsewhere you can easily search for. Or just look at the votes on our comments.

Getting one person to post here with one opinion or another doesn't constitute useful data. It just adds one more anecdote. It looks like no one besides the two of us pays attention to this thread.

In any case I engaged to express my opinion, not to prove you or myself right or wrong in our opinions. Time will tell.


>Or just look at the votes on our comments.

Votes are a popularity contest. I have a lot of downvotes, so you win the popularity contest. It's fine. I'm OK with that.

I'm more going for the correctness contest here. Who's actually right? That's all I care about here.

>Getting one person to post here with one opinion or another doesn't constitute useful data

This isn't true. One person would lend data to your case. Why? Because my claim is that nearly all people on HN aren't fooled by ChatGPT. So if you say it's so common, then just find one.

My claim is that it's so uncommon you can't even find one.

>I gave my anecdotal evidence, and the evidence of numerous posts on HN and elsewhere you can easily search for

I searched for this. I could not find one. You claim it's easily found, so you can win this debate by simply finding one comment that proves your point and linking it here. If it's as common as you say, then at least one person can be found. This makes sense.


That's just your opinion. I say we need to prove this out.

If a significant portion of the techy and non-techy population anthropomorphizes LLMs to the point where they don't understand that LLMs hallucinate, then surely some of those people exist on HN.

If one of you readers is such a person, someone who honestly has no idea what it means for ChatGPT to "hallucinate", then let us know (and be honest; please don't troll).

My bet is no one will respond in the affirmative, because the number of people who don't get it is minuscule.


You're arguing that something observed so often and consistently that it has had a name for decades -- the ELIZA Effect [1] -- doesn't actually happen often enough to care about.

I have brought up ChatGPT hallucinations with multiple friends and family members, some in tech and some not (like my parents and my kids), and with one exception none of them knew what I was talking about. Like most people, they think computers can't make mistakes, so it follows logically (for them) that an (apparently) intelligent machine can't make mistakes, i.e. hallucinate. I have a couple of my own ChatGPT transcripts that include hallucinations, and when I show those to people they say that I deliberately misled the AI, because how could it make a mistake?

In my own experience, which includes people who work in the software field and people who don't, including a couple of friends who work with neural networks and LLMs, almost no one understands how LLMs work, or what limitations they might have, or what "hallucinate" means in the context of ChatGPT. Almost everyone I know is much more likely to believe AIs have already or will soon put them out of a job and start turning us into slaves or launching nuclear strikes, because that's the nonsense they get fed by the media.

[1] https://en.wikipedia.org/wiki/ELIZA_effect


>doesn't actually happen often enough to care about.

That's my entire point. It doesn't happen often enough to care about.

Sounds like you have some anecdotal experience of it happening to your entire family and a lot of your friends.

I experience the opposite. It has happened to exactly none of my friends and family.

We do live in contradictory universes, where you experience one thing and I experience another. Given the contradiction, let's refer to the shared experience: nobody on this entire HN thread has experienced the ELIZA effect. The shared experience proves my POV.

>Almost everyone I know is much more likely to believe AIs have already or will soon put them out of a job

The first part of your sentence has a higher likelihood of being true, because instances of it are already happening. It's limited, given the limitations of LLMs, but we are at a point where, if the hallucinations are fixed, it could very much replace many jobs.

Nuclear strikes and slavery are a bit far-fetched.


I think you misread my first sentence.


No. You just misexpressed your point with a logical mistake.

You wanted to explain why I can't find evidence for the ELIZA effect on HN, but you didn't realize that this contradicts your overall point about the effect.

I exploited the flaw to point out the contradiction in your thinking. Your ideas are not logically coherent; you're following a sort of bias here, trying to construct ideas that support it.


Am I mistaken in thinking that "The Last of Us" covered this?


Is your reasoning here that it is selfish because not everyone can do that, and if everyone could/did, then society would collapse due to no food production, garbage collection, emergency services, law enforcement, etc., as all those people would also be out painting/kayaking?

I ask because as I first read your comment I was unable to see the selfishness, since GP outlined that they lacked the financial means to support that lifestyle (also implying that taxes would still be paid and money would flow through the system all the same). It took me writing out a reply to you, asking how you thought it selfish, before my brain came up with the potential answer I outlined in the first paragraph.


Right, my thinking here is that dreaming of running away from the very society that granted you the freedom to run away means you're not "paying it forward" enough to enable others to have the freedom you'd be taking advantage of.

Rather, I would imagine the "not-selfish" version of this would be to spend your life doing those things while finding ways to help others get to where you're lucky to be.

I just find the "I'll get mine and fuck off" attitude to be less than ideal.


> Right, my thinking here is that the dream of running away from the very society that granted you the freedom to run away seems like you're not "paying it forward" enough, to enable others to have the freedom you'd be taking advantage of.

I see it completely the opposite way. If someone has already made all the millions they'll ever need, it's better for society to free up that job for someone else who needs the income, instead of amassing more and more millions for one single person. Paying it forward here is opening up the opportunity for the next person.


Job? I'm not sure why we're talking about a job here; I'm not suggesting you stay at your employer beyond the time it takes to become independent.


Not that you asked, but that's not the attitude I have, and not the impact of retirement. Retirement is the freedom to do what you care about without the fear of going hungry or homeless. It's a luxury and a privilege, to be sure, but it is in no way telling anyone to fuck off. If that were the case, every retiree at 65 would be telling the world to fuck off. Or is there some arbitrary number at which you're not telling the world to fuck off? Is it OK to retire at 65 but not 55? Or 45? Should we work until we're dead?

A life of "leisure" still contributes to society. I would of course still pay bills and buy things, supporting businesses. I'd still pay my taxes, supporting the government. I'd still volunteer, like I do now outside my job. I'd like to teach, write, and create more (I like to sing and make music, art, and furniture). And I'd really like to start an informal class for underprivileged youth to learn computer skills and get a job without going to college, like I did.

If I didn't have to work, I'd have a hell of a lot more time to make the world a better place. In addition to the selfish kayaking, cooking, traveling, etc, that I already do with a job.


That society took from you before you were born, and "getting yours" is merely getting back to zero. You have assets and liabilities. Liabilities include a requirement for basic poverty-level food, shelter, and (if necessary) medical care. Your assets balance those liabilities at around $1M, so in my view that's actually zero. In fact, you're not even really a full legal person until you own property; otherwise you're on the hook to rent an apartment just to maintain a driver's license or a voter registration.


So... no one should exercise this freedom, because doing it is selfish? Then what are you paying it forward for? Freedom no one should ever exercise? It feels like a prison with transparent bars.


Giving GP the benefit of the doubt: how long now have we (at least the more tech-savvy crowd) been aware of the fact that filter bubbles exist?

It could be the case that search engines did not yield good results for GP.


I think the "used every day" point was more that a cargo/passenger jet can haul stuff all day, every day, whereas (hopefully) you don't have a need to bomb the living shit out of someone all day, every day.

Then again, perhaps maintenance of a stealth bomber really is horrendous enough that it wouldn't be able to fly sorties "every day".

But basically I read GP's comment as an apples-to-oranges comparison (since you traditionally wouldn't drop bombs using a cargo aircraft, and wouldn't airlift cargo with a bomber).


I realize that I might be illiterate on this topic, but I always thought the "shock" in "shock and awe" was the thorough evisceration of the defending army.

If I am incorrect in this thinking, I hope someone will give me a better understanding.


Not that it correlates much with everyday driving, since both speeds and driving patterns differ, but e.g. NASCAR drivers wear helmets (along with that whole neck-protection setup that latches to the helmet).

I don't know, however, whether a helmet might work worse in conjunction with an airbag. So personally I think I'd stay away from helmets in cars (but I really have too little data to make an informed decision).

Having said that, in the case of this company, perhaps they could offer their passengers a "Hövding" device? (Hövding being the Swedish word for a chieftain, though "hövve" is also slang for head; in the case of this product it is a... "backpack/necklace thingy" that is a wearable airbag. Supposedly it works really well, but probably comes with a price tag to match.)


> - Browsers make text expand to fill the window with an 8px margin. On a 4K monitor with a maximised window that means the margin is 0.21% of the available space and paragraphs are usually just one long line of text. Again, it's not very readable.

Disclaimer: I'm an old coot and my hardware is lagging behind (still on a single 22" display on my desktop, etc.), so I am most likely missing something, AND your overall argument here does appear sound, but I did find myself wondering:

why, if you have a 4K display and massive amounts of screen real estate, do you have a maximized browser window? I mean, with my crappy hardware I can understand why I do it, but I assume that I am the odd one/edge case here, not you.


Because 99% of the websites I browse are optimised for wide screens and/or contain lots of information that legitimately fills my 4K screen. I'm not going to resize my entire browser window for the one special snowflake that doesn't use CSS on their blog; I'm simply navigating away.


> why, if you have a 4K display and massive amounts of screen real estate, do you have a maximized browser window?

It's the job of every web dev to write code that works everywhere. If the user has chosen to run their browser maximised on a 4K monitor, that's their choice. When you write code, saying "the user is using it wrong" is never going to be an acceptable excuse, in my opinion, for shipping something that doesn't work well for everyone.
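For what it's worth, the usual fix on the site author's side is only a few lines of CSS. A minimal sketch, assuming the blog's body text lives in an article element (the selector and the 70ch figure are illustrative assumptions, not anyone's actual stylesheet):

    /* Cap line length so body text stays readable at any window width;
       roughly 60-70 characters per line is a common typographic target. */
    article {
      max-width: 70ch;   /* about 70 characters per line */
      margin: 0 auto;    /* center the text column in wide windows */
      padding: 0 1rem;   /* breathing room on narrow screens */
    }

With something like that in place, a maximised window on a 4K monitor gets a centered, readable column instead of one long line of text.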


Where did you get that impression? Web developers aim to write code that works for most people, most of the time. If you happen to be using Netscape Navigator, or an 8K monitor at maximum scale, you won't be in a group that QA can cover. As more people get 4K monitors, more websites will support them.


>saying "the user is using it wrong" is never going to be an acceptable excuse

unless you're Steve Jobs


I am not a number! I'm a free man!

