Ask HN: Does the HN commentariat have a reductive view of what a human being is?
107 points by scandox on Feb 1, 2023 | 122 comments
I've been very struck during most of the AI discussions recently by how little weight comments seem to give to the subtlety and rich contextual knowledge that humans bring to even quite simple activities.

I know we often over-estimate the value of our contributions. I know we often find that our functions can ultimately be automated in some respect. But I find in aggregate that the leading comments reflect a very arid conception of being a human connected to other humans.

For example, in the discussion about AI lawyers there was very little sense of the moral aspect of another human acting on behalf of a human client. In the discussions about the replacement of programming jobs by this kind of technology, there was not a great deal of confidence in the importance of human judgement in building human-focused systems.

Is this just reflective of our context as people that streamline and automate, or do HN readers just think a human isn't such a complex entity?

For me this is somewhat like the T-Shirt that says "I went outside once, but the graphics were crap"...except nobody's joking.




Some years ago, there was a hacker who was elected to their local parliament, in a small country. Previously, they had stated that, because of the program-proof equivalency, it would be trivial to analyze what government does and port it to computers. They don't talk like that any more.

I completely agree with you that hackerland is depressingly myopic. And the new power elite of Silicon Valley are dangerously contemptuous of human institutions.

But aside from that, I think it's just people who get used to one paradigm getting confused by another.

To the automation-centric thinker, human institutions seem to be ill-specified and allow for many absurdities. What they're not getting is that human institutions are simple frameworks to enable agents with judgment. Automation is about complicated frameworks to constrain agents that have no judgment.

People who know human systems (the vast majority of the world) are similarly confused by automation, because their assumptions are flipped.


> They don't talk like that any more.

bahahah! I laughed at that.

This reminds me of the people on HN (and elsewhere) who denigrate middle management as "doing nothing worthwhile". I read those comments and suspect that the people making them have never been in management.

Which is where I was, years ago. I remember making fun of "shiny shoes" managers at my first job. "What do they even do, to get that office and be paid all that money!?"

But once I stepped into managing people I realized how complicated and multi-dimensional we all are.

I will say I've also seen plenty of HN comments reflecting the richness and nuance of human understanding. I would not be too reductive about the reductiveness of this community (how meta of me).


> This reminds me of the people on HN (and elsewhere) who denigrate middle management as "doing nothing worthwhile". I read those comments and suspect that the people making them have never been in management.

I mean, the comments aren't coming out of nowhere: Remember the guy who was a supervisor at a wastewater treatment plant and didn't show up at work for 6 years because there was "nothing to do"? [1]

Believe it or not, there are more places like this. I worked for 1.5 years in public service when I was a student (I was level 1 support, so nothing fancy). I could basically spend half of my day learning new technologies and lurking on HN because there was nothing to do. And yet, my higher-ups decided to hire two more people (one with disabilities for diversity or whatever) because they had budget left.

Then I left to work at Amazon to finally do "real work" that creates value, but it was the same, except that there I got my occupational therapy from doing useless work instead of from stuff that interests me or benefits warehouse associates. At Amazon logistics you have to execute roughly one project per quarter. I had this PhD sitting there who wanted to change the layout of the receiving lanes as his quarterly project. I said it was BS (so did other people), and that his new experimental layout would not change anything except occupying a lot of space for the duration of the project (a few months). After spending a few weeks building the new layout and letting the experiment run for two months without better results, he came to the conclusion that his idea didn't work and everything was rolled back.

A general manager from another FC told me he probably did it to fill his project quota and for self-preservation. What a waste of time.

[1] https://www.mentalfloss.com/article/75380/man-spain-didnt-sh...


I get it, but never forget the flipside. Here is a good anecdotal example of just how wrong things can go: https://www.youtube.com/watch?v=WSatPoD2W-o


Fair. Bad managers definitely exist.


People confuse the map with the territory


The fact that there are people who think GPT is a living, sentient being because it can autocomplete its way to sentences like, “Of course I have real feelings”… including people who are engineers, who understand how language models work… should tell you that some humans have a pretty strange idea of what humanity consists of.

It’s like confusing a photo of a person with a person, and a photo generator with a human cloning machine, and saying, “But when I look in the mirror, that’s me! So I am equal to my visual reflection.” Language is the I/O of a cognition process that is itself only a small part of being human and alive.

A lot of activities and jobs do use humans as “cogs” in a machine, and in some cases an AI might make a better cog, but I think some commenters do underestimate the amount of context humans bring to various tasks. You don’t have to analyze the humans to see how complex the jobs are, just try to do the jobs by AI, or watch as others try. Like try to make a self-driving car; it’s hard. Try to replace a cook at Denny’s with a robot. Or a lawyer. See how long the tail of edge cases is.


I blame the Turing test for this line of thinking. Quacks like a duck and all that.

It will be an interesting new landscape to navigate, deciding what is “human” or “sentient” as the technology gets better and better at imitating us.


We don’t even have a good definition of sentience.


Or life.

Once GPT has access to the internet and can create an unlimited number of commenters, humans will be statistically unlikely to ever interact online again :)


The one that constantly grinds my gears is commenters comparing how AI systems are trained to human learning as if they are the same. E.g. "How is *GPT taking in data and producing an output different than a human learning a skill and making prose/code/art?"


> How [is an AI artist] taking in data and producing an output different than a human learning a skill and making prose/code/art?

There may be a misalignment in intent of the claim and interpretation of the claim. As someone that researches generative modeling I actually think there is an important aspect to this question, but I do not think that this question has anything to do with how the brain or the machine learn art. It has to do with legality and morals.

So I'll break it down. We believe that it is morally and legally acceptable for a human to look at copyrighted artwork and even mimic it in the process of learning how to become a better artist (sales are where the morality breaks down and especially with impersonation). The question is "where is the nuanced difference between a machine using that data and a human using that data to learn?" This doesn't depend on the learning techniques just like how no one cares if one person learns differently than another person. Obviously no one thinks AI art should impersonate real artists nor do they think people should sell this work if it contains copyrighted material. That's in line with the human artist values (fine to draw Mickey Mouse, not fine to sell a drawing of Mickey Mouse and worse to sell that drawing and claim it is official Disney art).

This is a very important question because we need to create laws about how we can train these systems and how we handle the data that they produce (two very different things!). The line between human and machine is a lot thinner than people think (think digital painting and CGI), and it doesn't matter that stochastic algorithms learn differently than humans learn. The question is about how/if learning material can be used and if machines should be treated differently. And if so, why.

But this is way more than one sentence.


> We believe that it is morally and legally acceptable for a human to look at copyrighted artwork and even mimic it in the process of learning how to become a better artist (sales are where the morality breaks down and especially with impersonation).

I agree, even computers can do that, but what is the license of the output? (Both human generated output and AI generated output.)

The problem is that someone made one of the AIs draw a few Mickey Mouse versions and claimed that the AI's license said the output was public domain, so everyone could use the AI-created drawings of Mickey Mouse freely. (I guess Disney disagrees.)


Well, copyright law isn’t interested in whether someone intended to publish an infringing work, only in whether it is an infringing work, so that person’s claim that “any output is in the public domain” is legally incoherent.

Expand this a little bit in the opposite direction…imagine if Stable Diffusion was made illegal. Someone accuses me of using this illegal tool for an image that doesn’t look like anyone else’s image as far as the court is concerned for copyright. I put the image on my website. If the image itself is not at all infringing, then what is the evidence that Stable Diffusion was used? Should the police be issued a warrant to search my private property for proof that I used Stable Diffusion without a shred of evidence?

This frames the conversation about copyright within the legal structure and not some unrelated philosophical structure.


> so that person’s claim that “any output is in the public domain” is legally incoherent.

I fully agree.

Anyway, I found the original link https://twitter.com/eze3d/status/1601695610498781184


> This is a very important question because we need to create laws about how we can train these systems and how we handle the data that they produce

This is a rather prejudiced statement. AI does pose some interesting questions about copyright, but the most obvious question is whether the existing laws are sufficient. You go on to ask that question, but this statement presupposes the answer is that they’re not.


In that sentence, prejudiced is a pejorative way to say opinionated.

I assert that existing legal systems do not suffice. Human governments and courts have shown themselves to be near-sighted, slow, and even corrupt when it comes to rapidly advancing lucrative technology (except perhaps when that tech is a weapon).


It’s not pejorative, it’s a little stuffy and old-fashioned, like someone just finished reading an essay by Edmund Burke or something, but in that context it is hardly akin to the other meanings.

Oh, and I assert that anyone who doesn’t think our legal system can properly decide who if anyone is being wronged by these tools is just plain ignorant of the law.


Fair enough. I submit the following well-known examples to support my assertion. What are your counterexamples?

  Uber/AirBnB facilitating illegal activity but growing large enough to force legalization.
  Facebook et al providing data and conduits to influence elections.


Oh boy...

Well here's some basics about the law. Broadly speaking, a trial requires a plaintiff who makes a claim of a wrong allegedly committed by a defendant.

In the United States, which at the federal and state level (save Louisiana) follows English common law jurisprudence, there are two kinds of wrongs: private and public. Broadly speaking, public wrongs are committed against "the community", while private wrongs are committed against an individual.

So if we're talking about, say, Stable Diffusion, we can reference the actual claim of the private wrong being made and the details of the case. The plaintiffs are some artists. The defendants are some companies who created an ML model. The claim is that they were wronged by copyright infringement, DMCA violations, common law and statutory rights of publicity, etc. For copyright infringement, the defense will attempt to show that SD is fair use, and they'll reference Sony v Universal, Google v Authors Guild, Baker v Selden, ABC v Aerokiller. The plaintiffs will argue it's not fair use while also attempting to show the damages to the marketplace for competing works. There will be strategies on both sides for the other claimed wrongs. I don't see anything here that the legal system can't place mainly into existing frameworks while also adding to the existing corpus of case law and helping to further define what is and isn't considered a wrong for future courts when it comes to training ML models on published works.

So I'm not exactly sure what you're trying to say about Uber, AirBnB or Facebook because those just sound like a lot of political opinions as they are not legally framed. If you and other people want representatives to write new laws about social media companies, ride sharing services or short-term rentals, well go right ahead!

IMO, no new laws are necessary for the two "high-profile" (hah!) cases against Stable Diffusion and Copilot. I'm also pretty sure that the defendants will successfully prove to the courts that these tools are indeed fair use.

Politically, I would want my representatives to argue against legislation that would change such an interpretation!


I agree with your assessment of how it'll go down. However that verdict will in effect be legalizing a categorically new thing.

Software is often built to perform well for expected inputs, yet when given unexpected inputs it may produce bizarre results that when passed to other software create difficult-to-anticipate cascades of failures. The legal system is capable of the same.


Prejudiced is exactly the correct word to use, as you’re stating a preconceived judgment. Expressing an opinion would be different, you’re simply stating a controversial point of view as if it were a fact, and attempting to reframe the debate based upon the presupposition.


That's the technical meaning, not the more relevant culturally loaded meaning.

If I say someone "voted in a prejudiced way," it's technically true if they'd already decided how they'd vote before they did so, but most people would understand my meaning to be that their vote was motivated by bigotry of some kind.


The meaning is perfectly clear from the context of my comment. The comment presupposes a judgement for what is probably the most controversial question in the entire debate.


> the most obvious question is whether the existing laws are sufficient.

These are in fact quite similar questions. There is a nuanced aspect to this though: with ML generation you're less likely to be aware that you have produced copyrighted (or derivative) work or are plagiarizing. The reason the question is being asked is not to presuppose that the answer is that they're not but it is instead a response to those that are saying existing laws aren't enough.


I largely believe that AI artwork should be treated 1:1 the same as human output.

That is, in a non-ideal scenario, if it creates art that wouldn't normally be subject to copyright, then the process (call it learning, call it mapping, call it baking for all I care) is largely irrelevant.

In an ideal scenario, no artwork would be owned and we wouldn't have this discussion at all but alas.

I don't see how valuing AI output is some kind of nihilistic devaluing of the human race, unless you already hold some kind of preconceived negative idea of machine learning models.

Remember these models are built by humans. It's human ingenuity that's scaring the pants off the art industry at the moment. TALL men and women and others of INDUSTRY and BUSINESS are making COOL THINGS with VALUE.


The problem is that this is not a question that a court would ask about these tools and since copyright is a legal invention it is kind of pointless.

It’s the kind of conversation to have in the context of art (better yet if your contribution to the conversation is art itself) or the philosophy of language, although the question could be a little less “stoned in a Freshman dorm” and more informed by the actual discourse of those specialties as these questions are sort of old hat.

If you want to ask meaningful questions of these tools and copyright you need to study the law, plain and simple.


I think that's a trivial question which is almost entirely about economics and business culture and says a lot more about the concept of property as a moral right than it does about machine learning.

It's trivial because the real question is "Can AI produce creative work which is at least culturally equivalent - and perhaps even better - than the best human creative work?"

I'm fairly sure the answer to that question is "Yes". Because as soon as you go past modelling content and start modelling psychology, aesthetics, politics, and the dynamics of culture itself, the question answers itself.

With fast-enough processing and a large enough dataset, why wouldn't Behavioural/Political/Cultural AI be possible, perhaps with different AI systems using culture and human affect itself as a memetic battlefield?

Of course then we're very much not in Kansas any more, Toto. The danger - actually the likelihood - is that AI becomes an irresistibly seductive mechanised sociopath, able to automate everything, including politics, culture, and everything else that makes humans human.

But what about ethics, you ask?

How much AI research is asking that question at all, never mind taking it seriously?


> But what about ethics, you ask?

> How much AI research is asking that question at all, never mind taking it seriously?

I think quite a lot of us (speaking as a researcher) are in fact asking ethics questions. But I think there are nuances we're making decisions on that others don't get. I think there are also nuances that many artists get that we researchers don't. Unfortunately we're frequently speaking past one another, and worse, we like sensationalism, so this is even being encouraged. That makes us talk in bubbles, because we feel like we're not being heard and get understandably frustrated. But this is a societal issue, and as such we need a lot of points of view to resolve it; the bubbling effect will prevent us from doing that in a useful manner. Communication in the global era is harder than we're primed to think, because communication in local groups has far fewer issues, and when there are issues they are often quickly resolved.


> E.g. "How is *GPT taking in data and producing an output different than a human learning a skill and making prose/code/art?"

It can go the other way too, like proof that diffusion models memorize heavily repeated training examples being used as evidence they aren't creating stuff at all and just copy pasting, but artists can memorize whole works too, especially artists that copy style really well.

Many English classes will have you verbatim memorize poetry as well, to show how things were done with memorization in oral storytelling traditions.


> like proof that diffusion models memorize heavily repeated training examples being used as evidence they aren't creating stuff at all and just copy pasting

This proof actually demonstrated the opposite. In that paper it was found that something like only 11 pictures from millions were recoverable. They were only recoverable because those handful of pictures were accidentally duplicated in the training data many multiples more than other pictures. Even pictures which were accidentally duplicated dozens of times were found to be unrecoverable by the trained model.


Didn't you just state the Chinese room thought experiment?

It's an important observation that humans are just as capable of doing tasks without understanding them, and so it's no surprise that the computer doesn't understand them either.


I think it's an incredibly important question to be able to explain how an AI creating novel work is different from a human creating novel work. Why does this grind your gears?


To me it seems to imply a stunningly nihilistic point of view vis-a-vis human writing (or art, where it also gets repeated a lot here).

It seems almost definitionally obvious that what an LLM does is not the same as what a human does – both on the basis that if all human writing were merely done via blending together other writing we had seen in the past, it would appear to be impossible for us to have developed written communication in the first place, and on the basis that when I write something, I mean something I am then attempting to communicate. An LLM never means to communicate anything, there is no there there; it simply reproduces the most likely tokens in response to a prompt.

To insist that we're just a bunch of walking, breathing prompt-reproducers essentially seems like it's rooted in a belief that we have no interior lives, and that meaning in writing or art is utterly illusory.


see: http://www.jaronlanier.com/zombie.html

It’s not said very much, but this style of dehumanization is really corrosive in a way that directly benefits the worst forms of human governments and structures, and this fact goes, I think, genuinely unrecognized too often in tech-land.

if we really are p-zombies, then those people aren’t really suffering, right, so it’s fine …


> To insist that we're just a bunch of walking, breathing prompt-reproducers essentially seems like it's rooted in a belief that we have no interior lives, and that meaning in writing or art is utterly illusory

Let’s assume humans are not just evolved pattern machines for a second. A human can still do a completely non profound work of art following a prompt to draw X in the style of Y. And that’s ok. So why can a machine not do the same?

Surely not everything a human does is intrinsically profound.


This is not just moving but fully inverting the goal posts. Nobody at any point was claiming that a machine can’t ape non-profound or rote or meaningless human output.

The original discussion was precisely an objection to the attitude underlying "How is *GPT taking in data and producing an output different than a human learning a skill and making prose/code/art?" and the answer is right in your premise: while not everything a human does is profound, some of it is. A human can intend to mean something with prose or art, even if not all prose or art means something, but any meaning we see in ChatGPT’s output is essentially pareidolia.


I disagree. I don’t care much about what is profound. I think most of it is not. Things that we call profound are really just astute observations of patterns in the real world, and there’s nothing wrong with that.

However, profundity doesn’t need to factor into the debate over whether AI should or should not be allowed to train on things. If we allow humans to copy things, then humans ought to be allowed to copy things with dumb, non-sentient AI too.

AI in its current state is just a tool, much like a paintbrush.

Cue the inevitable appeal to copying exact works, rebuttals about training on human-painted mimicries, and then bam, you’ve got the author’s special style learned by the model with extra steps.

It’s annoying and pointless.

Art that is merely visually intriguing is not very interesting. If an artist makes something without a particular idea to communicate, it’s just aesthetics. It is not profound. If an artist has an idea and creates a work that represents it, then maybe it is profound. But it doesn’t matter if it was made with paint or a computer. The idea is the profound thing. AI is not sentient. It’s still the user.

The appeals to pareidolia are wrong. Synthesis of ideas from past data is natural. But the AI does not choose things. What you’re really complaining about is creation of art from apparent randomness. Not the AI model alone but monkeys on a typewriter getting something compelling from the AI.

What do we do when the tools are so powerful that a monkey creates a profound work that the monkey doesn’t understand? Shrug.


So your first 6 paragraphs have nothing to do with anything I wrote – you're just arguing with some other post you've made up in your head.

> The appeals to pareidolia are wrong. Synthesis of ideas from past data is natural. But the AI does not choose things. What you’re really complaining about is creation of art from apparent randomness. Not the AI model alone but monkeys on a typewriter getting something compelling from the AI.

No, you've failed to understand what I'm saying entirely (because, again, you've responded to some other post that only exists in your mind).

What I'm talking about is intention and its relationship to meaning, in the philosophical sense (and not... copyright or whatever it is you're rambling on about).

Witness: when ChatGPT famously mis-asserts the number of characters in a word (say, that there are twelve characters in the word "thirteen"), it's not that it's trying and failing to count, because it's confused by letter forms or its attention wanders like a 3 year old or its internal representation of countable sets glitches around the number 8 or something – it never counted anything at all, it's simply the case that twelve is the most statistically likely set of tokens corresponding to that input prompt per its training set. And when it produces a factually correct result (say, "there are 81 words in the first sentence of the declaration of independence"), it produces it for exactly the same reason – not because it has counted the words and formed an internal representation and intends to mean its internal understanding, but simply because 81 is the most statistically likely set of tokens corresponding to that prompt per its training set.

And yet when it produces these correct results, people ooh and aah over how "smart" it is, how much it has "understood", how "good it is at counting; better than my son!", and when it produces incorrect results people deride it as dumb and so forth, and all of this, all of this, is pareidolia; it is neither smart in the one case nor dumb in the other, it does not learn in the sense the word is normally used, it does no counting. We're anthropomorphizing an algorithm that is doing nothing like what we imagine it to do, because we mistake the statistical order in its expressions for the presence of a meaning intended by those expressions. It's all projection on our end.
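A toy sketch of that point (my own illustration with a made-up score table, not an actual model): greedy decoding just returns whichever continuation scores highest, whether or not it happens to be arithmetically true.

  # Toy sketch of greedy next-token decoding over a made-up score table.
  # No counting ever happens; a right answer and a wrong answer are produced
  # by exactly the same rule: pick the highest-scoring continuation.
  fake_next_token_scores = {
      "twelve": 0.41,    # a statistically common continuation, true or not
      "thirteen": 0.32,
      "eight": 0.27,
  }

  def greedy_next_token(scores):
      return max(scores, key=scores.get)

  print(greedy_next_token(fake_next_token_scores))  # prints "twelve"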


Your opinion is not the only one that I’m addressing. I clearly understand your point which I address by:

> What you’re really complaining about is creation of art from apparent randomness. Not the AI model alone but monkeys on a typewriter getting something compelling from the AI.

You accuse others of anthropomorphizing the tool, but you do the same. Art created with ChatGPT is not created by ChatGPT; it is created by a human using ChatGPT. There is no intrinsic limitation on the profundity of art created using ChatGPT or other algorithms.

It’s like complaining that paint is stupid. A comment that is largely irrelevant to the artistic merit of paintings.


> Art created with ChatGPT is not created by ChatGPT; it is created by a human using ChatGPT.

Sure, in approximately the same way that the CEO of Sunrise is an animator. Pull the other one, it's got bells on.

Yours is an utterly incoherent interpretation; when ChatGPT outputs that there are 12 characters in the word 13, I have not "created the meaning" 12. You're just fixated on this "actually I am le real artist for typing prompts" axe you want to grind, but it has fuck all to do with anything I'm saying.


You are cherry-picking a dumb example. We don’t shit on paint when someone makes poop. What you should be cherry-picking is examples of art that people would consider profound upon seeing it. Otherwise you’ll simply look like a dumbass when you imply that only trash will be generated and then beautiful stuff is generated. The fact that current AI has dumb interpretations of things is hardly a fundamental quality of generative algos.

My statement is simply that the algos are a tool. And tools can be used to make good art.


I suspect it's the same reason it grinds my gears that it's called a "learning rate" instead of "step size" in ML.

Not only is it a less precise term, but it gives the wrong implications.

Personally, I'm on the side of releasing training data. Let everybody train on everything. But it's always felt absurd to say that the ML models are "learning" things.

But hey, none of us know how learning works anyway, right? So maybe it's not such a big distinction. As you say, none of us can pinpoint why a model isn't learning vs why we are.
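To make the terminology point concrete, here's a minimal sketch (my own illustration, not from the comment above) of a plain gradient-descent loop, where the "learning rate" is nothing more than the step size scaling each update:

  # One-dimensional gradient descent on f(x) = (x - 3)^2.
  # The "learning rate" is just the step size: nothing is "learned" here,
  # we simply step downhill along the gradient.
  def grad(x):
      return 2 * (x - 3)  # derivative of (x - 3)^2

  x = 0.0
  step_size = 0.1  # conventionally called the "learning rate"
  for _ in range(100):
      x -= step_size * grad(x)

  print(x)  # approaches the minimum at x = 3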


I think the problem with these words is that the field's vernacular carries different meanings than the common lexicon. But this is quite true for any field. "Field" is even a good example of this, as mathematicians use it in a drastically different way than I just used it now. This can make people think they understand the technicals more than they do. But if you're making the argument that ML needs to learn more math and needs more rigor, then I'd defend that claim. It is a personal pet peeve of mine (fuck man, how often I have to explain to my research group what a covariance matrix is and why it is essential to diffusion is absurd).


A lot of what we see is cargo cult engineering and not fundamental research in ML. Most of it is applied research or engineering - there is a little bit of fundamental research that actually expands our own knowledge about how things work and what their limits are, while applied science keeps marching on (maybe towards fundamentally impossible goals).


I'm not the parent commenter, but it grinds my gears because the answer is obvious. Humans value human creativity because of emotion, shared experience, and the value we place on each other as humans.


It's just as obvious to me that humans do not actually care where creative works they appreciate come from.

Some of my favorite creative works came from some awful people and others came from algorithms.

I don't care. It does not affect the works or the way in which the works affect me.


Awful people are humans, with human experiences.

All algorithms are made by humans and/or process human input.

And besides, I never said creativity is a requirement for appreciation. I appreciate things in nature regardless of the fact they weren’t the result of creativity.


The fashion industry and pretty much the concept and existence of luxury brands are counter to this claim.


Because the implied answer is "it isn't different". That grinds our gears (for various values of "our") because 1) it assumes an answer to what is, at best, an open question, and 2) we think it assumes the wrong answer.

If asked in good faith (not assuming the answer), I can agree that it's an important question.


ChatGPT can already replicate the bugs in an average shop; I would say it is becoming post-human if we remove its ability to generate nonsense.


> Why does this grind your gears?

For the same reason it ground Kurt Cobain's gears.

"He knows all our pretty songs, and he likes to sing along. But he don't know what it means"

I always thought that was a bit condescending but it applies perfectly to chatgpt.


> I think it's an incredibly important question

Why do you think that?

If we had an widely accepted answer, how would the world be different?


Because we could most likely use that information both to gain greater insight into how humans learn and, most importantly, to innovate. We could also strive to create better AI based on the principles discovered.


as a "reductivist" I feel I stand on the side of freedom, and that the other side wants to own our souls as intellectual property, under the guise of extending and protecting their current contractual relationships with media companies

(I don't believe intellectual property is a morally legitimate concept, since it comes from an exploration of a pre-existing space of ideas (also, I am a Georgist, so I don't believe physical space can be morally owned either))

naturally this strongly held belief can result in sharp words against perceived enemies.


I am torn. While I am skeptical of a lot of IP concepts, I am much more skeptical of a law that is enforced against internet users who want to download MP3s but not enforced against technology companies that want to sell AI generated content.

So I currently find myself saying "we must get a handle on this copyright infringement" because equal protection is important, while also believing that copyright protections ought to be much weaker than they are.


Careful. There is no "view of the HN commentariat". Many thousands of different people write on HN, and they have differing views. There isn't even usually a meaningful prevailing view: cognitive biases will make unusual or conflicting views stand out more to you, and there's an intrinsic (and topic-specific) bias on what controversies people will wade into and when. Even the initial conditions on threads can radically alter what the "commentariat" will appear to be saying.


Perhaps a good example of a case where a reductive approach is necessary in order to identify something sufficiently well to talk about it?

I'm not sure how better I could have expressed the question in a way that would allow for a discussion. But definitely open to suggestion.


Such reductivism in AI has been going on since Turing. The linguistic outputs that the test is measured in are a small subset of what human beings do, and a more recent subset at that, in evolutionary terms. Much of our intersubjective attunement to one another happens below the level of language (e.g. you know when your wife is cross or in a bad mood...). What's worse is that we only have language, in the form of computer languages, with which to capture and describe the full extent of the mindedness of a human being in order to create an AI, yet language is a higher-order phenomenon than the complete mind that we're looking to replicate.


On the Internet, all the output we have is text on a page. We may be quickly approaching a world where it is impossible to distinguish a human on the Internet from a bot on the Internet. If the output cannot be used to distinguish bot vs. human, does it really matter who created that output?

Did ChatGPT generate the above output?


An HN populated by ChatGPT bots would be close to valueless, even if they produced output indistinguishable from the median commenter.

It matters because talking to a human is a worthwhile thing to do, but talking with a probabilistic robot commenter is not, beyond some level of diminishing novelty.


> An HN populated by ChatGPT bots would be close to valueless, even if they produced output indistinguishable from the median commenter.

Instead of median, what if the produced output is indistinguishable from the 99th percentile commenter (meaning, a "genius" level commenter)? Would it still be valueless? In what sense?


I'm pretty confident that the probabilistic conversational AIs lack what I'd call 'synthesis'.

But at a higher level, talking with a bug or robot or rubber duck, no matter how smart it sounds, isn't very valuable. There is no mind to change on the other side of the conversation. There is no life that is being lived on the other side of the screen, the experience is entirely one sided.

The max possible value, imo, is at the level of playing a video game vs a bot. It could be fun, you'll probably improve at the game, and maybe you'll learn something, but more than that? I am deeply skeptical.


>An HN populated by ChatGPT bots would be close to valueless, even if they produced output indistinguishable from the median commenter.

False, it would be even more valuable to the typical HN user. (You and I are not typical users.) The typical user lurks and reads comments only. If there were more discussions on more topics for the typical user to read, learn from, and make up their mind about, then that would be valuable, even if those discussions were algorithmically generated.


I think you're confusing people commenting in context with people's real thoughts. In a way, your question is reductive - reducing the commenters to their comments and ignoring the richness of their real and probably conflicting thoughts.

I know within myself I don't have a completely coherent world view. And I don't feel a need to correct that. When I comment on things it's not always my view, not always thought out, reactive, perhaps insightful in a moment but not long term, or maybe a burst of clarity to my otherwise unclear mind.

Plus you get slices of people commenting on different articles. I didn't comment on that one, but I am on this one. So you're grouping everyone commenting together.


I think one thing that makes me doubt this is the frequency with which I see replies to other comments that are obviously unaware of the context those comments were made in - even the immediate parent! I accept that many people will not read a posted article. But reading the thread one is adding to (or even an extra layer or two up!) seems like it should be a minimum.

If people were routinely commenting in context, their real thoughts may well differ from their comments. Often, though, it seems people are offering their real thoughts about whatever sentence they see in front of them.


Reacting to anything that makes them a teensy bit angry.

But I'm not sure that means it's their real view. I got really angry at some things said at work recently, and I wanted to jump straight into chat and say what I thought. I let it simmer and after a while realised that I was wrong and they were right; my anger was completely misplaced and reactionary.

With an internet comment on HN you have to get in there right away or nobody will read your angry reply, so people won't (a) read up the thread or (b) take time away to think about it.


Fair enough. Though I guess where that leads me is that by encouraging that level of interaction, this medium may still be leading to the state OP is concerned about. You suggested OP might be reductionist - but if that’s the case, it’s evidence of a spiral downwards into reductionism that’s driven by decontextualization. OP sees reductionism, then ends up reinforcing it, so to speak.

And if we accept that, then OP’s premise seems likely to be true if most HN users are doing most of their “social” engagement in decontextualized spaces like this one, rather than remaining rooted in community and the fullness of others’ humanity.


Your points are something to think about. They're quite far from what I thought my point was, which is more individual and along the lines of "when the discussion is about X then I'm thinking and focussing on X and not taking a wider view, my comments are not encompassing my own wider view on the issue".


I love this comment.


> do HN readers just think a human isn't such a complex entity

I'm someone who has made these kinds of comments before. It may help you to place such comments of mine in the context that I am not someone who works in AI, but I am someone who studied philosophy and has both studied the scientific literature on and thought deeply about the nature of the mind.

While we're not yet close to understanding the mind in its entirety, something I was struck by as I read about the parts of the mind we do understand is just how many human capabilities do seem to be explainable on a physical neural network basis (as in an actual network of physical neurons, not the AI thing) without requiring any notion of consciousness or uniquely human (or even animal) capability.

My view is not that AIs are currently anywhere close to the capabilities of humans at the moment. But:

- I am somewhat agnostic on the question of whether they could match them in future. And I think other people should be too. We're not really in a position to know this yet.

- I think a lot of the limitation of AIs are limitations in IO capabilities: AIs can typically only consume text or images, and they can't typically influence the world themselves at all (one of the things that has come out of research into (human) perception is that it's generally very much an active process - activities that might naively seem passive like vision actually involve tight feedback loops and actively interacting with the world).

- To me, the way modern "deep learning" models work does seem like computers genuinely learning from experience, and it's possible that it differs from human learning largely in scale and complexity rather than being fundamentally different (it is of course possible that it's not the case, but I don't think this is obviously the case).

I would also agree with another commenter that part of the purpose of such comments is to provoke thought and break people out of their assumptions. Many people take the idea that human cognition is fundamentally different to machine cognition (or even animal cognition!) for granted. And while that may ultimately end up being the case, I think it's valuable to question that belief.


It's the same sort of idea as people who think being able to look stuff up on Wikipedia is the same as knowing it (and my pet peeve, thinking they're contributing to a discussion by reading a Wikipedia article out loud)

I'm not sure where it comes from, I suspect it's just immaturity. I've seen it here but also in the real world, I'm not sure HN overindexes on it, maybe even the opposite


When you have a hammer, everything looks like a nail.

After the failed AI hype of the turn of the millennium, we finally have a breakthrough in a niche of machine learning, so there is a major push to see if this impressive yet very limited piece of technology is just a few layers and GPUs away from AGI.

Sorry, and not to undervalue what an incredible achievement these past few years in AI have been, what we have today is no more than a glorified, generalised Markov chain. The best you could have is something "smarter", but still as versatile as a gnat.

From there, to have mammal levels of thought complexity, you need to implement theory of mind, consciousness and sentience, which we still have no clue what the hell they are or how they work.
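For what the Markov chain comparison means concretely, here's a toy word-level Markov chain (my sketch, not the commenter's): it continues text purely from observed transition frequencies. Real LLMs condition on far longer contexts and learned representations, so the analogy is loose at best.

  # Toy word-level Markov chain: record which word follows which in a corpus,
  # then generate text by repeatedly sampling an observed successor.
  import random
  from collections import defaultdict

  corpus = "the cat sat on the mat and the cat ate the fish".split()

  transitions = defaultdict(list)
  for current, following in zip(corpus, corpus[1:]):
      transitions[current].append(following)

  word = "the"
  output = [word]
  for _ in range(8):
      word = random.choice(transitions.get(word, corpus))  # fall back if unseen
      output.append(word)

  print(" ".join(output))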


> From there, to have mammal levels of thought complexity, you need to implement pattern recognition, abstraction, theory of mind, consciousness and sentience, which we still have no clue what the hell they are or how they work.

it’s probably not a coincidence that the undercurrent of “because those things are all illusions!” is fairly visible these days. Can’t have a failure if you redefine it out of existence!


I think it's just a matter of people being results driven. For many purposes, it doesn't matter how the output is generated - if it works, it works. If an AI lawyer can (actually, consistently) achieve better outcomes for its clients, then does it matter that the AI doesn't have a life story behind it? Probably not to the client. (The fact that so many HNers think that ChatGPT can do that probably reflects a lack of understanding of the legal process on their part, but as HNers are mostly not lawyers, that's not that surprising.)

The other possibility is that some HNers can tend to look down on people in non-tech fields and so, maybe, HNers have a reductive view of some human beings. I don't see nearly as much excited discussion of programmers being replaced by AI as lawyers, for example.


I think part one here is insightful, part two not so much :-).

Whether or not an AI has a model can affect its function; the self-driving development teams have found just how much "intuition" people bring to the driving task (estimating what other cars might do, weighting behaviors like "edging over" to understand intent).

The recent article about Marines defeating a sentry gun by holding boxes over their heads is another good example of the AI not having a model of the threat, just a model of what approaching "people" look like.

I was intrigued by the frustration an engineer shared with me about how simulators where all the drivers are AIs work so much better than ones where humans are driving too. It points to problem sets where a model of human behavior is essential to successfully achieving the objective.

Law is not a particularly good example because it is practically program code these days. The human intent model is simpler[1] because it is based on one variable, "doing anything the law/contract is trying to prevent."

[1] Important distinction here is that it isn't "simple" as in easy to do, it is merely "less difficult" than things like driver behavior or enemy soldier behavior.


I think your comment is a great expression of what I'm referring to. I've always considered that the way the result is being produced does matter. That intention is significant even when the result cannot be distinguished. Of course I also believe that in time, that significance will cause important distinctions to emerge. But even if that doesn't happen in a particular case, I still take it that the human intention behind it is meaningful.

Of course what this kind of AI could prove is that our inner states are meaningless, which would be interesting - but terminal for my world view.


Regardless of what a human being "is", what a human being "does" almost all of the time is less glamorous than I suppose you expect (from the tone of your inquiry).

One could call LLMs "nothing more than statistical plausibility generators", and then have a hell of a time distinguishing that description from the vast majority of a human's subjective conscious experience. ie: https://en.wikipedia.org/wiki/Left-brain_interpreter

I like to think that some of us are capable of more than that, sometimes, but of course that's just what my localized bundle of physics makes me think.


Many people used to use "the subtlety and rich contextual knowledge that humans bring" as an argument for why a computer would never beat the human chess champion, and then after a computer beat the human chess champion, they persisted in using it to argue that a computer would never beat the human go champion.


Yup, from the perspective of black-box functionalism, it looks like the anti-reductionist camp hasn't exactly come to terms with what is subtlety and what is, for lack of a better term, anthropo-essentialism.

What I'm referring to, of course, is the "obvious"/"common sense"/"reality of" human thought having a je ne sais quoi to it that isn't present in AI. This is exemplified in the "yes, but not X!" response to the encroachment of super-human performance in domain after domain, or the intangible difference in emotional response to AI-created works as opposed to similar human-created works. To a functionalist, it appears as an unaddressed framing problem.

There are certainly quantitative differences between AI and human behavior, but the gap finally appears to be accelerating in its closing rather than simply decrementing.


A lack of compassion when it comes to automation replacing professionals like lawyers, accountants, and even doctors is understandable.

Dealing with those kinds of professionals as a client is often a dehumanizing and unpleasant experience for many people.

Much of the time, such dealings aren't particularly wanted to begin with. Those seeking the services of such professionals have often been forced into it in some way, many times by government or by government-imposed systems.

Not only are such dealings an unwanted burden, but they're often extremely costly (financially, and in terms of time and effort), with the clients sometimes receiving poor service, as well as little, if any, real benefit in the end.

It doesn't surprise me at all that people would be eager to see technologies that may help them avoid, or potentially reduce the cost of, having to deal with those kinds of professionals.


Have to agree on that - lawyers, doctors, and the like don't really have enough time to address the customer in a humane manner or to get the full context of the problem.

We had a local pediatrician when I was a child who was super helpful.

After that I never had any other doctor that would give us so much attention.

Lawyers are even worse - pay $100 just to have a discussion that confirms what you already know.

Maybe if I were super rich and could drop $1000 per hour, I would get a lawyer or doctor who actually digs into the problem and gives me a solution that saves me more, but for now $100 paid to the lawyer usually gets me $0 and the satisfaction that the bad guy did not get any money because it went to the lawyer.


Last time I went to the doctor it wasn't because of the government. Also, having to deal with the law might be burdensome, but without it most people would be worse off. And I'd rather pay my taxes than have no social systems, police, streets, etc.

Of course some parts of it could be improved, but the issues aren't because of the government; rather, they come from the very complex social system we live in.

But my experience might also be because I live in a direct democracy…


Yes, and a reductive view of what matters in life. I tend to view this place as a message board full of tech professionals / founders and for discussion of those issues, not a place to find spiritual guidance or anything outside pure professional advice.


> rich contextual knowledge that humans bring to even quite simple activities.

I feel like the continual TikTok reduction of attention span and the high-speed memetics of it all is massively reducing our "rich contextual knowledge" and we're becoming a bunch of flippant oafs.


Does the questioner have a reductive view of the HN commentariat?


There's definitely some AI hype going around right now, so it's important to filter that from these conversations. We aren't as advanced as people say, and we aren't advancing as fast as people say either. We are advancing.

Most HN readers will be receptive and maybe even in agreement about statements concerning the hardness of these problems, but not the magicalness of these problems. In your post, you used a lot of magical words, which the commentariat is correct to identify as non-constructive. Phrases like "human connected to other humans", "human judgement", "moral aspect".

There is nothing about humanness that makes these problems any less tractable. If they are hard and we don't know how to build machines that solve them as well as humans do, so be it. But they aren't hard for magical reasons relating to poorly defined terms like "morality" or "connectedness". At least that is the opinion of most scientifically minded people, and probably the commentariat.


HN is overwhelmingly made up of young men of above-average intelligence in a high-risk segment of an industry that has a reputation for lower-than-average social skills.

Go into every thread with that understanding.


I think people are just excited to see something they perceive as new. They will tire of it once they realize it's just big data + machine learning with new window dressing. AI will not exist in our great-great grandchildren's lifetimes. Elon's voice changer, for example, is just advanced auto-tuning. Very advanced, but still auto-tuning nonetheless.

Machine learning tools like ChatGPT do not show their work. The HN crowd especially will realize that this is a tool people will get answers from that provides no sources or references, no links: just an opaque box of magic algorithms, basically carrying on from the lessons learned on the social media platforms to tweak society. These tools will start off in benevolent mode and gradually devolve into malevolent mode, and by the time people realize it they will have bought into these tools and built financial dependencies and business models around them.


My hobby horse is blaming the voting/ranking systems, but I know that's only part of it. The voting system here, as on other social media sites, is reductive in character. There is a whole universe of ways to rank/rate comments - it could be a wide vector - but we get up/no-op/down as our only options.

Social media drives out nuance and subtlety by design.

Blogs forced long-form replies and consideration, which allowed for a much richer channel for discussion, though one that was much slower and less likely to get comments.

I think most of us here have a rich set of opinions, and quite a bit to offer to a discussion, but when you're possibly getting downvoted for just uttering the wrong opinion, it causes a lot of self-censorship.


That's just how it is around here. To (some?) SV types friends are edges in a social graph, preferences are like/dislike counters, and humans are language models. Welcome to the shallows.

That said, life's fast, especially on news-keyed discussion forums, and thoughtful, balanced comments on complex issues can take a really long time to compose, as well as becoming very long themselves. I think most do not bother with that (including myself; I view it as an unfortunate pathology of this site's general set-up as well as of modern online life).

It can be tough at times, but it helps to remember that these voices are far from everyone's. In certain threads they suspiciously congregate though.


I think the audience here realizes it's a matter of when, not if, these things happen. So the human component is rather inconsequential to the discussion if that's your mindset.

Other than that, the product might be getting a lot of hype when we know it's probably vaporware, or half-baked. The self-driving AI topic has gone this direction: I think we think it's awesome and has potential, but when a company like Tesla starts selling it before it is done, we call BS. We know they haven't scratched the surface of the technical challenges that problem presents. It's near fraud to sell it for thousands of dollars.


Yes, that is true, but it doesn't have as much to do with AI as it does with the entire economic system that underpins the sector. It's not just AI-futurists who have an impoverished view of humanity because of some particular belief about how intelligence works; reducing human beings to a bunch of signal and noise inputs underpins the design of every social media system, it's the reason why you can't get a human being on the phone if your Google account stops working, and so on.

An impoverished view of humanity, whether it's true or not, is the basis for the business models underpinning almost all activity in the industry, so when those people turn their attention towards AI, that is of course also what they see. If people really were to acknowledge that human beings are at the centre of technology, then probably 90% of what's being built is unethical and anti-social in its very design.

It reminds me of a great article by Ted Chiang where he discussed this in the context of common fears of AI. https://www.buzzfeednews.com/article/tedchiang/the-real-dang...


Some of it may just be apathy about how well some of the human counterparts do anyway. Public defenders, for example, often just accept the first plea deal offered by a DA. Not that there aren't great public defenders, but there is some non-trivial number where the core negotiation could be replaced with a rules engine...never mind AI.


Public defenders are underpaid and overworked and could benefit from such tools in order to serve more clients.

My wife has a legal background and has thoroughly convinced me that these tools are not going to be allowed in a court room any time soon and not explicitly because it is “AI” but for many other procedural reasons. Of course a public defender could still use such tools for the near endless paperwork produced outside of the court room required of their profession!


Agree...I probably should have pointed out my cynicism isn't about public defenders as people. Rather, what they are reduced to delivering within the confines of a very flawed system. I assume the great ones might have advantages that help them be that way, other sources of income, etc.


Somehow half the people think reductionism means "silly oversimplification", and the other half thinks emergence is some sort of magical thinking. And hardly ever do the twain meet.

However, Reductionism and Emergence are actually supposed to be complementary. A bit like the philosophical version of differentiation and integration if you will.

Reduction breaks complex things down into simpler parts which are easier to understand. Emergence takes simple parts and shows how they can form a complex system when organized.

If you do break a complex system down, you have to remember that you're only looking at some part of it. If you do have a bunch of parts, remember that they don't magically just form a complex system, you need to study the organization too.

If you want to fully understand a complex system (like eg. a single celled organism, a jellyfish, a human, or something even more complex like an entire ecosystem all at once), you're going to need both.


Yes I think many posters here regularly undervalue human capability and experience compared to AI or even regular code. Probably due to motivated reasoning more than any explicit misanthropy - no one wants a stick in the mud to deride their fun new technology. I think on the whole "all" sides are represented here, just far more slanted towards tech than most other sites.

Some other arguments paraphrased:

Car on autopilot crashes itself in completely clear conditions - "many humans are bad drivers too"

Stack Overflow buried in confident confabulations - "many human answers are incorrect too"

Prospect of next-gen GPT bots posting on news sites and forums - "if you can't tell the difference, why would you care if you're talking to a human?"

Prospect of image generation models razing the entry-level art job market - "pictures made by clicking a button have just as much artistic value if they look good to me"


A counterpoint is that a lot of technological progress so far has been making things a bit shitter but a lot cheaper.

Consider self-service checkouts. They are much more painful than a human checkout, but one staff member can run 10 of them at a time.

Or IKEA furniture. The old hand made furniture my grandparents had was so much better, but it was so expensive that they had to save for months to get a desk.

Another example that I find particularly painful is shop signs. Back a century ago, every town had sign painters who would draw and maintain those beautiful old signs. Nowadays the vast majority are just mass-produced prints on metal, plastic, or worse (see the LycaMobile signs common in corner shops here). Unsurprisingly, sign writing isn't a very common career now. London buses still use handmade blinds rather than the low-resolution LED matrices common elsewhere, which I really appreciate.

History is full of humans accepting mediocrity as long as it's even slightly cheaper. That's Ryanair's whole business model.


Is Daft Punk mediocre because they programmed drum machines instead of learning how to play them? Or programmed keyboards instead of learning how to play them? Are they mediocre because they used multitrack recording instead of recording the entire composition live?

My 4-year-old saw me drawing some notes on the piano roll view in Logic Pro and asked if she could try. I opened up a new project and gave her a 2-bar loop to draw drum notes. She clicked around and placed notes using little more than random chance…

…but because her actions were in the framework of the drum sampler and piano roll, it sounded like a perfectly workable, if mediocre, dance rhythm! We have a kid-sized drum kit in the dining room and she is completely unable to play anything other than random noise.

Make of this what you will…


I'm fairly certain you could create an AI bot that would indistinguishably mimic your regular HN user, in the same style as GPT-4chan: https://www.youtube.com/watch?v=efPrtcLdcdM

The only problem is that it might be banned for spewing too many falsehoods.


Perhaps you are approaching it from a normative POV, whereas the average among the HN crowd is looking at it from a descriptive POV.

In other words, maybe those acquainted with software and AI see the things you mentioned - AI Lawyers and AI developers - as inevitabilities that we will simply have to face. This in turn leads HN'ers to think in terms of entrepreneurship, or "how can this make me money in the future?", which means adopting those trends rather than rejecting them, because if you reject them, someone else will adopt them. Thus, the whole techno-entrepreneurial spirit of this forum leaves little space for viewpoints that offer no technological or entrepreneurial benefit or advancement, such as rejecting AI.


> Is this just reflective of our context as people that streamline and automate, or do HN readers just think a human isn't such a complex entity?

Most definitely the former. Humans are most certainly complex. But some of the tasks we perform aren't.


> do HN readers just think a human isn't such a complex entity

Maybe I'm wrong, but it seems like the majority of the comments of this type that I see are written by accounts created minutes beforehand, whether meant as throwaways or otherwise.


I think so. There's an undercurrent of thought that what we will be able to create will be vastly superior to what we had before. This results in a perception of past accomplishments as inferior to future ones, and a singular focus on making everything conform to these future technologies by whatever means necessary. There needs to be an appreciation of things as they are, and of what they enable at the time, so that it can inspire the development of future technologies, keep others at bay, and let existing systems continue.


Why are Marxists Marxist?

Or said differently: when you have a hammer, everything is a nail.

And our belovèd HN, as amazing and as addictive as it is, is a community by, for, and of “the software developer-entrepreneur”. By definition, armed with the hammer of “your mind tries to reduce everything to algorithms” (the personality type attracted to writing software for that very reason!), of course they will do that to humans as well.

Of course, I’d love an HN of poets, but that would have the problem of the other extreme: so much empathetic emoting that it would be hard to turn it into clear, concise, cutting, and actionable insights…


I think there is a relatively unique thing about programming that makes them/us believe we know more than we do (not unique to programming, but not every job does this). The thing is, we are often jack-of-all-trades types who work with a large range of domains. This gives us insight into those domains but does not make us experts. That insight can trick us into thinking we understand a domain, but expertise comes from understanding nuance and having an intimate grasp of the vernacular. Programmers are often the middle part of that Gaussian meme[0]: enough knowledge to think you understand a subject, but not enough to really know it. It happens because we're human.

[0]https://i.imgflip.com/5gfpyc.jpg


With AI, we can only ever emulate the intellectual aspect of human understanding, completely forgoing the intuitive aspect, which is equally important. A computer might be able to describe the difference between red and orange using its knowledge of wavelengths, but it won't understand the difference in that indescribable way that there simply aren't words for.


Some Star Trek!: https://www.youtube.com/watch?v=vjuQRCG_sUw

What makes consciousness? The Star Trek episode asked whether Data is sentient, but I think the mirror question is just as interesting: how are we anything more than a machine?

We need fuel like a car; there are byproducts to the expenditure of that energy and evidence of chemical reactions; there are all manner of chemical systems that regulate our subjective experiences, measurable and deterministic. We can inject drugs that will turn off the brain's ability to form memories...

If there are physical and chemical rules that we are subject to, then discovering the rules that govern a machine and the rules that govern ourselves seems like a matter of degree of complexity, not of intractability.

If you believe reality is deterministic, then I don't think this "reductive" view of humans is that far-fetched.

In Star Trek it was asked whether Data has a soul, but I think it's just as reasonable to ask whether we have souls. Do we have something beyond that which can be measured by physics?

Does Human = Machine or does Human = Machine + Soul?

I personally think Human = Machine.

Lunch has an effect on interview results. How "in control" do you think you are?


(Not working in AI.)

I feel like many of these reductive views are expressed in order to provoke unusual thoughts. This is useful.


Current discussions of AI are novel because these systems now show emergent behaviours.

Reductionist views of humans are still incorrect, but what has changed is that AI is now viewed non-reductively.

Nobody takes the Chomskyan view that 'AIs will never be intelligent, as they are just statistical parrots' seriously anymore.


A worker can be a very complex human being even as the work they do is simple and easily automated.


100%

I think I understand what OP is getting at, but if an AI lawyer proves to be both cheaper to hire and more effective at defending me... it's a no-brainer IMO.

Then after my AI lawyer and I win my jaywalking hearing or whatever, I can meet up with friends and talk about things like humans do.


> do HN readers just think a human isn't such a complex entity?

I've never gotten a positive comment score for arguing that women aren't just optimizing for wealth, height, and appearance in a partner.


So I've read a couple of screens, and the entire conversation circles the main issue here with generative models (I won't get deeper into the AI revolution, the singularity, or anything like that): the old world has already gotten old.

The current systems, the legal one, copyright everywhere, tons of human systems and processes that are based on human-generated text and images (soon music, video, and probably voices) as some kind of guarantee, have just been deprecated simultaneously.

Soon most of them will be targeted by scammers of some kind, exploiting the soft spot of any person or process that requires or presumes that some output in text or image form must have been created by a human being.

Kids faking their homework assignments are just the beginning. In a year or two, IF LLMs similar to ChatGPT somehow become permanently open to the public, a la Google Search, you'll find every process that presumes text or image output is a human-only capability getting owned.

Owned as in hacked systems: society will hack itself, subsuming all the legacy societal systems and processes into newer, LLM-driven ways of doing things.

Of course, things like copyright are the canary here, and the status quo has so far been successful in containing the LLM tsunami of societal change (there you have Google, Facebook, and the Chinese giants doing nothing publicly with their even more advanced LLMs), but as OpenAI has demonstrated, the ability to translate LLM outputs into money is incredible.

Maybe the FAANGs don't need 10 billion bucks, but it's a big planet, and many players are watching how OpenAI, using fairly dated AI tech, got itself 10 billion dollars of VC money just by publishing a chatbot with autoscaling infrastructure behind it.

So it is easy to predict that money will drive the adoption of future LLMs, regardless of what the FAANGs or even nation states can regulate. Whoever ends up with the upper hand by adopting LLMs or the newer AI tech available in the near future will most probably end up changing their own societies faster, giving themselves an edge over the rest of the planet that is too good to discard.

So things are changing. These LLMs, as simple as they are, could be just like the first submarine cables sending telegraph messages under the Atlantic for the first time: 30 years later, you couldn't recognize most of the societal processes running in the total system of the world.


What grinds my gears is how easily many people dismiss the unethical exploitation of labour in the global south to label training data.

Literal human beings are being subject to the worst filth imaginable and are not compensated fairly nor reasonably protected from the harm. All so that people in rich countries can replace customer service reps and mess around with a chat bot.

It’s like sticking coal miners in a shaft with no safety gear, paying them as little as possible, and getting away with it.


I’m gonna go ahead and guess that they would rather do that than be in a coal mine, with safety gear.


at the risk of potentially exceeding HN's tolerance for content of a spiritual nature:

a human being is a vessel for "consciousness" .. whatever that is.

it's not clear anything else in the universe possesses this quality, except perhaps some other advanced animals.

AI so far has been an interesting statistical optimizer, but clearly lacks this purely human feature.


AGI was here all along, humans were just too self important to realize it. Any network with feedback loops and memory is intelligent, there is nothing special about human intelligence compared to a mouse or a bacteria other than the size of the network and the fact that humans can work together at scale.

The search for AGI will die quickly because of this.


It seems to me HN is passionate about enabling progress via technology. It's a tech board, not a philosophy/sociology one, and the limits on posting don't allow it to transition nicely into one.

I disagree with a lot of the pro-AI takes here while being a huge fan of AI, but I have never seen anything malicious or reductionist (other than the separation from the impact of people losing their jobs because of the tools we create, which tech/IT requires). My development and IT teams displaced many people while positively impacting exponentially more. Tech/IT people have to be dispassionate about that, or guilt would prevent us from being effective.

Most of the pro AI people I have disagreed with have strongly humanist reasons for their position and they feel that they are promoting a boon for humanity while I am promoting leeching corporatist IP laws. I feel if we don't have a framework for rewarding creatives we will miss out on a ton of individual contributions that greatly benefit mankind.

As to lawyers, I went through the process. Lawyers are not acting on behalf of a human client. They are acting on behalf of least friction/average best outcomes. They care more about their relationship with a sitting judge and the prosecutors than with their client, because those relationships can result in the most good, saving thousands of human years lost to prison, even if it might not result optimally for each individual client. I don't blame my lawyer for optimizing his resources for maximum outcome, but I don't pretend that isn't what he is doing. American justice is a meat-processing plant, not a do-whatever-it-takes, 'let 100 criminals go free to ensure no innocent man goes to prison' situation.


Surprise: most people in this community have a theologically-impoverished view of mankind. Not sure what you were expecting from an aggregator site run by venture capitalists.


I think the issue here is that many of these things are extremely complicated but _look_ simple. If you aren't in the weeds of that technology you're primed to attach yourself to the simple answer. This is often exacerbated as our technical vernacular overlaps with English, as well as other technical vernaculars. This can make someone believe that they have more understanding of a topic than they really have. This is very obvious in AI/ML research (hint, usually when people say "AI" they aren't a researcher) because there is a lot of hype around the research and products. But I have a great example of this miscommunication on HN from a comment I made yesterday[0]. I said

> [Confidence] indicate[s] how confident the model is of the result, not how likely the prediction is to be accurate.

The problem here is likely how "confidence" and "likelihood" are used. The words are overloaded. Maybe I should have said "not how probable the prediction is" but this could even be less clear. Most people think likelihood and probability are the same thing.
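
To make the distinction concrete, here's a minimal sketch with made-up numbers (the toy classifier outputs and labels below are hypothetical, purely for illustration):

    import numpy as np

    # Suppose a classifier produced these softmax outputs for five inputs...
    probs = np.array([
        [0.95, 0.05],
        [0.90, 0.10],
        [0.85, 0.15],
        [0.80, 0.20],
        [0.75, 0.25],
    ])
    # ...and these were the true labels.
    labels = np.array([0, 1, 1, 0, 1])

    confidence = probs.max(axis=1)             # what the model claims about itself
    predictions = probs.argmax(axis=1)         # its predicted classes
    accuracy = (predictions == labels).mean()  # how often it is actually right

    print(f"mean confidence: {confidence.mean():.2f}")  # 0.85
    print(f"accuracy:        {accuracy:.2f}")           # 0.40

The gap between those two numbers is (mis)calibration: confidence is a statement the model makes about its own output, not a measurement of how likely that output is to be correct.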

So there's a lot to why this is happening. Misreadings, ego, fooling ourselves, and more. I think there are only a few solutions though.

First, we need to recognize that there's nothing wrong with being wrong. After all, we are all wrong. There is no absolute truth. Our perceptions are just a model of the world, not the world[1].

Second, we have to foster a culture that encourages updating our opinions as we learn more.

Third, maybe we don't need to comment on everything? We need to be careful because we might think we know more than we do, especially since we might know more than the average person and want to help this other person understand (but this doesn't mean we're an expert or even right!).

Fourth, we need to recognize that language is actually really complicated and that miscommunication is quite frequent. The purpose of language is to communicate an idea from one brain and pass it to another brain. But this is done through lossy compression. Good faith speaking is doing our best to encode in a fashion that is most likely to be well interpreted by our listener's decoder ("speak to your audience" is hard on the internet. Large audiences have a large variance in priors!). Good faith listening is doing our best to make our decoder align with the _intent_ of the speaker's message. Good faith means we need to recognize the stochastic nature of language and that this is more difficult as the diversity of our audience increases (communication is easy with friends but harder with strangers).

I'm sure others have more points to make and I'd love to hear other possible solutions or disagreements with what I've claimed. Let's communicate to update all our priors.

(I know this was originally about ML, which I research, but I think the question was key on a broader concept. If we want to discuss stochastic parrots or other ML stuff we can definitely do so. Sorry if this was in fact a non sequitur)

Edit: I believe we're seeing this in real time in this thread[2]

[0] https://news.ycombinator.com/item?id=34608009

[1] https://hermiene.net/essays-trans/relativity_of_wrong.html

[2] https://news.ycombinator.com/item?id=34619277



