
If AI were just reading, there would be much less controversy. It would also be pretty useless. The issue is that AI is creating its own derivative content based on the content it ingests.

Isn't any answer to a question which hasn't been previously answered a derivative work? Or when a human writes a parody of a song, or when a new type of music is influenced by something which came before?

This argument is so bizarre to me. Humans create new, spontaneous thoughts. AI doesn’t have that. Even if someone’s comment is influenced by all the data they have ingested over their lives, their style is distinct and deliberate, to the point where people have been doxxed and anonymous accounts uncovered because someone recognized the writing style. There’s no deliberation behind AI, just statistical probabilities. There are no new or spontaneous thoughts, at most pseudorandomness introduced by the author of the model interface.

Even if you give GenAI unlimited time, it will not develop its own writing/drawing/painting style or come up with a novel idea, because strictly by how it works it can only create „new” work by interpolating its dataset


This argument is so bizarre to me.

There is no evidence whatsoever to support that humans create "new, spontaneous thoughts" in any materially, qualitatively different way than an AI does: that is, as anything other than a Turing-computable function over the current state. It may be that current AIs can't, but the notion that there is some fundamental barrier is a hypothesis with no evidence to support it.

> Even if you give GenAI unlimited time, it will not develop its own writing/drawing/painting style or come up with a novel idea, because strictly by how it works it can only create „new” work by interpolating its dataset

If you know of any mechanism whereby humans can do anything qualitatively different, then you'd have the basis for a Nobel Prize-winning discovery. We know of no mechanism that could allow humans to exceed the Turing computability that AI models are limited to.

We don't even know how to formalize what it would mean to "come up with a novel idea" in the sense you appear to mean: presumably something purely random would not satisfy you, yet something purely Turing computable would also not do, and we know of no computable functions that are not Turing computable.


This argument, by now a common refrain from defenders of companies like OpenAI, misses the entire putative point of intellectual property, and the point of law in general. It is a distraction of a common sort - an attempt to reframe a moral and legal question into an abstract ontological one.

The question of whether the mechanism of learning in a human brain and that in an artificial neural network are similar is a philosophical, and perhaps technical, one that is interesting, but it is not relevant to why intellectual property law was conceived: to economically incentivize human citizens to spend their time producing creative works. I don't actually think property law is a good way to do this. Nonetheless, when massive capital investments are used to scrape artists' work, without their consent, in order to undercut their ability to make a living from that work for the benefit of private corporations, the question is whether this should violate the artificial notion of intellectual property that we have constructed for this purpose. In that sense, it's fairly obvious that the answer is yes.


I wasn't responding to a moral and legal question. I was responding to a comment arguing that humans are some magical special case in nature.

If you want to argue it's a distraction, argue that with the person I replied to, who was the person who changed the focus.


Yea, I'll give you that. But many people seem to have the argument you've made loaded up for these conversations - an argument which is dubious on its own terms, by the way, since we don't really have a complete picture of human learning, and the assumption that it simply follows the mechanisms we understand from machine learning is not a null hypothesis that doesn't demand justification - and it needs to be addressed wherever possible that the ontological question is not what matters here.

> which is dubious on its own terms, by the way, as we don't really have a complete picture of human learning and the assumption that it simply follows the mechanisms we understand from machine learning is not a null hypothesis that doesn't demand justification

The argument I made in no way rests on a "complete picture of human learning". The only thing it rests on is the lack of evidence of computation exceeding the Turing computable set. Finding evidence of such computation would upend physics, symbolic logic, and maths. It'd be a finding that'd guarantee a Nobel Prize.

I gave the justification. It's a simple one, and it stands on its own. There is no known computable function that exceeds the Turing computable, and all Turing computable functions can be computed on any Turing complete system. Per the extended Church-Turing thesis this includes any natural system, given the limitations of known physics. In other words: unless you can show new, unknown physics, human brains are computers with the same limitations as any electronic computer, and the notion of "something new" arising from humans, other than as a computation over pre-existing state, in a way an electronic computer can't also do, is an entirely unsupportable hypothesis.

> and it needs to be addressed wherever possible that the ontological question is not what matters here

It may not be what matters to you, but to me the question you clearly would prefer to discuss is largely uninteresting.


> In other words: As a Turing-computable function over the current state.

You need to be a bit more expansive. Turing-computable functions need to halt and return eventually. (And they need to be proven to halt.)

> We know of no mechanism that could allow humans to exceed the Turing computability that AI models are limited to.

Depends on which AI models you are talking about? When generating content, humans have access to vastly more computational resources than current AI models. To give a really silly example: as a human I can swirl some water around in a bucket and be inspired by the sight. A current AI model does not have the computational resources to simulate the bucket of water (nor does it have a robotic arm and a camera to interact with the real thing instead.)


> You need to be a bit more expansive. Turing-computable functions need to halt and return eventually. (And they need to be proven to halt.)

This is pedantry. Any non-halting function can be decomposed into a step function and a loop. What matters is that step function. But ignoring that, human existence halts, and so human thought processes can be treated as a singular function that halts.
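
For what it's worth, here is a minimal Python sketch of that decomposition (the state type and step function are made up purely for illustration): the potentially unbounded process is nothing but a loop around a step function that always halts.

    # Hypothetical illustration: an ongoing process framed as a halting "step"
    # function applied repeatedly by a driver loop.

    def step(state: int) -> int:
        # A single, always-halting transition from one state to the next.
        return state + 1

    def run(initial_state: int, max_steps: int) -> int:
        # The (potentially unbounded) process is just the loop around step().
        state = initial_state
        for _ in range(max_steps):
            state = step(state)
        return state

    print(run(0, 10))  # -> 10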

> Depends on which AI models you are talking about? When generating content, humans have access to vastly more computational resources than current AI models. To give a really silly example: as a human I can swirl some water around in a bucket and be inspired by the sight. A current AI model does not have the computational resources to simulate the bucket of water (nor does it have a robotic arm and a camera to interact with the real thing instead.)

An AI model does not have computational resources. It's a bunch of numbers. The point is not the actual execution but theoretical computational power if unconstrained by execution environment.

The Church-Turing thesis also presupposes an unlimited amount of time and storage.


Yes, that's why we need something stronger than the Church-Turing thesis.

See https://scottaaronson.blog/?p=735 'Why Philosophers should care about Computational Complexity'

Basically, what the brain can do in reasonable amounts of time (eg polynomial time), computers can also do in polynomial time. To make it a thesis something like this might work: "no physically realisable computing machine (including the brain) can do more in polynomial time than BQP already allows" https://en.wikipedia.org/wiki/BQP


If people were claiming that a computer might be able to do it but would be too slow, that might be an angle to take. But to date, in these discussions, none of the people arguing that brains can do more have claimed they're merely more efficient; they claim brains inherently have more capabilities, so it's an unnecessarily convoluted argument.

> Humans create new, spontaneous thoughts

I don't believe we do; just look to media: very few plot-lines in movies/TV are more than a "boy meets girl" or Pocahontas retelling.

And if you say that a model could not create anything new because of its static data set but humans could... I disagree with that, because we humans are working with a data set that we only add to on some days. If we use the example of writing a TV script, the writer draws from their knowledge (gained through life experience), which is as finite as a model's training set.

I've made this sort of comment before. Even look to high fantasy; what are elves but humans with different ears? Goblins are just little humans with green skin. Dragons are just big lizards. Minotaurs are just humans but mixed with a bull. We basically create no new ideas - 99% of human "creativity" is just us riffing on things we know of that already exist.

I'd say the incidences of humans having a brand new thought or experience not rooted in something that already exists is very, very low.

Even just asking the free ChatGPT to make me a fantasy species, with some culture and some images of the various things it described, does pretty well: https://imgchest.com/p/lqyeapqkk7d. But it's all rooted in existing concepts, same as anything most humans would produce.


> Humans create new, spontaneous thoughts.

The compatibility of determinism and free will is still hotly debated. There is a good chance that humans don’t „create“.

> There’s no deliberation behind AI, just statistical probabilities. There’s no new or spontaneous thoughts, at most pseudorandomness introduced by the author of the model interface.

You can say exactly the same about deterministic humans since it is often argued that the randomness of thermodynamic or quantum mechanical processes is irrelevant to the question of whether free will is possible. This is justified by the fact that our concept of freedom means a decision that is self-determined by reasons and not a sequence of events determined by chance.


> The compatibility of determinism and free will is still hotly debated. There is a good chance that humans don’t „create“.

Determinism and free will are pretty irrelevant here.

Unless P=NP, there's no way for us to distinguish in general between eg pseudo random systems and truly random systems from the outside.
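
As a crude, hedged illustration (not a proof of the complexity claim): in the Python sketch below, a seeded pseudorandom byte stream and an OS-entropy byte stream look identical to a naive statistical test. The specific test and sample size are arbitrary choices for illustration.

    import os
    import random

    # One stream from a deterministic, seeded PRNG; one from OS entropy.
    prng = random.Random(42)
    pseudo = bytes(prng.getrandbits(8) for _ in range(100_000))
    true_ish = os.urandom(100_000)

    def mean_byte(stream: bytes) -> float:
        # A naive frequency statistic; both streams hover around 127.5.
        return sum(stream) / len(stream)

    print(mean_byte(pseudo), mean_byte(true_ish))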

Btw, I don't think determinism in humans/AI has anything to do with deliberation.

The newest AI models are allowed to deliberate. At least by some meanings of the word.

> This is justified by the fact that our concept of freedom means a decision that is self-determined by reasons and not a sequence of events determined by chance.

Well, different people have different definitions here. None of them very satisfying.


> Determinism and free will are pretty irrelevant here.

No. It’s the other way around. Free will is the basis for „creating something new“.

> Btw, I don't think determinism in humans/AI has anything to do with deliberation.

With determinism there is no deliberation.


> With determinism there is no deliberation.

As far as we can tell, all the laws of the universe are completely deterministic. (And that includes quantum mechanics.) As far as we can tell, human beings obey the laws of physics.

(To explain: quantum mechanics as a theory is completely deterministic and even linear. Some outdated interpretations of quantum mechanics, like Copenhagen, use randomisation. But interpretations don't make a difference to what the underlying theory actually is. And more widely accepted interpretations like 'Many Worlds' preserve the determinism of the underlying theory.)

Btw, neural nets are typically sampled from, and you can use as good a random number generator (even a physical random number generator) as there is, if you want to. I don't think it'll change what we think neural nets are capable of.
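
To make "sampled from" concrete, here is a hedged Python sketch using a toy next-token distribution rather than a real model; the point is only that the source of randomness (a seeded PRNG or OS/hardware entropy) is pluggable.

    import math
    import random
    import secrets

    # Toy "model output": unnormalized scores (logits) over a tiny vocabulary.
    logits = {"cat": 2.0, "dog": 1.5, "fish": 0.2}

    def sample(logits, rng):
        # Softmax over the logits, then draw one token using the supplied RNG.
        total = sum(math.exp(v) for v in logits.values())
        probs = {k: math.exp(v) / total for k, v in logits.items()}
        r = rng.random()
        cumulative = 0.0
        for token, p in probs.items():
            cumulative += p
            if r <= cumulative:
                return token
        return token  # fall back to the last token on floating-point rounding

    print(sample(logits, random.Random(0)))        # deterministic, seeded PRNG
    print(sample(logits, secrets.SystemRandom()))  # OS / hardware entropy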


That's exactly their point (and mine), with respect to the person above arguing humans unlike AI can create "new things". For that distinction to make sense "new things" must be interpreted as "something that can't be deterministically derived from the current world state", as they're trying to create a distinction between a purely deterministic algorithm and human consciousness.

Paramount+ on iOS was terrible the last time I used it, too. I tend to binge Star Trek on flights, so I like to download a bunch of episodes. Paramount+ had such a terrible experience (at least 10% of the time videos would be downright corrupted), I ended up cancelling my standalone subscription and getting it through Apple TV so I could use the Apple TV app.

The Apple TV app is 100% the only way to cope with Paramount+. Of all the streaming services I use regularly -- Netflix, Hulu, Disney+, Peacock, YouTube -- it's the only one that fails for me more often than not when I use its app directly.

If they don't find enjoyment in a particular request, they are free to turn it down.

GDP isn't the only measure of a country's success. European countries surpass the United States in many metrics relating to the actual wellbeing of their citizens.

Which European countries specifically? Europe is a big place. Are you including Moldova? Or just the nice parts of the EU?

You can cherry pick measures of success to make one country look better than other countries. So what. For example, the USA does better than almost all European countries in 5-year cancer survival rates.


Or you can dig into the details, as Politico did https://www.politico.eu/article/cancer-europe-america-compar...

and found: The reason the U.S.’s strong performance on cancer comes as a shock is because access to care in the country is notoriously unequal. But, it turns out, that's far less true of the elderly.

Age 65 is when virtually everyone in the U.S. qualifies for Medicare — America’s national, taxpayer-subsidized, government-run (dare we say socialized), comprehensive health insurance program.

In other words, the data you are arguing with shows that EU-style socialized medicine produced better outcomes.


Although Medicare isn't cheap if you currently (or recently) had a high W-2 income. I pay about what I would for a private plan (or COBRA) for now.

Presumably you can afford it if your income is that high. While there are arguments to be made that it shouldn't be means-based at all (it destigmatizes it, for one), it does seem to be working as intended.

It's not unreasonable for a plan that the government is always trying to chip away at for financial reasons. But a lot of people look at Medicare and think that it's essentially free, which it decidedly is not for anyone with a current or recent high salary. (The add-ons cost a bit too, but relatively not much.)

My wording was sloppy: I should have said "...many European countries..." or just "many countries", for that matter. My point is that the level of GDP in the United States is not required for a thriving society.

What is the minimum GDP per capita required for a thriving society?

This is a bad-faith question and you know it.

I know no such thing. A high GDP per capita might not be required to have a thriving society in the modern world but surely there is a lower limit. It should be possible to at least roughly quantify that limit. Could we still have a thriving society if GDP was cut to, let's say, $10000 per person?

I don't have that answer. But I see no evidence that the United States benefits—as a society—from our high GDP.

Would we be better off with a lower GDP?

I don't know. But we certainly would be better off if we optimized for other metrics besides GDP.

Which other metrics should we optimize for, and how should those metrics be weighted relative to each other?

Was Moldova included in the grandparent's comment on labor laws?

re: cancer survival. Are you referring to this work from 2020, showing that the US has more cancer than the EU, even though odds of survival are better?

The US Has Higher Incidence, Survival of Rare Cancers Compared With Europe (2020) https://www.ajmc.com/view/the-us-has-higher-incidence-surviv...

and shows that Age-adjusted incidence for all rare cancers combined was 17 percentage points higher in the United States than in Europe. The 5-year net survival for all rare cancers was significantly higher in the United States compared with Europe (54% vs 48%).

so roughly 17 points higher incidence and 6 points higher survival.

Can we talk about cherry picking?


Right, which was exactly my point. By cherry picking statistics you can make any developed country look better than any other developed country. So what.

> You can't plug the flue rank into the diapason for frequency modulation, or change the envelope, etc.

"String" sounds are implemented by having two ranks tuned slightly different from each other. They're not plugged "into" each other, but they're definitely working together to produce a certain sound.

Anna Lapwood is a very exciting organist. She has the social media skills to build a huge following, enjoys exploring all facets of the instrument in both traditional and non-traditional contexts, and is an exceptional musician.


>"String" sounds are implemented by having two ranks tuned slightly different from each other.

The effect you describe is called "voix celeste" (or just "celeste") on pipe organs. But it's common to have "string" stops without the detuning. These are just open flue pipes with unusually narrow scaling, which causes them to produce lots of upper harmonics and weak fundamentals, like real bowed string instruments.
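
For anyone curious what that slight detuning does acoustically, here is a small Python sketch (the pitch and the amount of detuning are just illustrative): two sine-wave "ranks" a few cents apart sum into a slowly beating, shimmering tone, which is the celeste effect.

    import math
    import struct
    import wave

    SAMPLE_RATE = 44100
    SECONDS = 4
    BASE_HZ = 220.0      # illustrative pitch for the first rank
    CENTS_SHARP = 6      # second rank detuned a few cents sharp
    DETUNED_HZ = BASE_HZ * 2 ** (CENTS_SHARP / 1200)

    frames = []
    for n in range(SAMPLE_RATE * SECONDS):
        t = n / SAMPLE_RATE
        # Two "ranks": nearly identical sines whose sum beats slowly.
        sample = 0.4 * math.sin(2 * math.pi * BASE_HZ * t) \
               + 0.4 * math.sin(2 * math.pi * DETUNED_HZ * t)
        frames.append(struct.pack("<h", int(sample * 32767)))

    with wave.open("celeste.wav", "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(2)
        f.setframerate(SAMPLE_RATE)
        f.writeframes(b"".join(frames))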


Piano strings are also detuned. The ensemble sound of a string section also comes from small detunings (and timing differences). I don't see that as a characteristic feature of a synthesizer.

Totally agree on Anna Lapwood.


What a beautiful instrument! When I saw the title I assumed this would be another instrument cobbled together from various decommissioned instruments. This is a much more interesting project!

Comcast's technical side does a lot of interesting work: they were one of the first big ISPs to roll out IPv6, for instance. Not saying they're on the whole a benevolent (or even trustworthy) corporation, but these sorts of things don't necessarily warrant immediate suspicion.


I backed a Pebble Time Round Gold on the last Kickstarter, which was cancelled when Pebble was acquired. Somehow the devices ended up on Amazon, and I snagged one there. It's a phenomenally stylish device.


My white 20mm one with a brown leather band started more conversations than anything else I’ve ever owned and used, which was surprising but neat :)

Unfortunately mine got stolen and broken, which is a shame. I wonder how hard they are to buy today, and how difficult battery replacements for them are…


Battery replacement is difficult but not impossible; what borders on impossible is restoring the waterproof seal. Once you've replaced the battery you basically have to keep it away from water. I wasn't careful with mine and lost two to water damage after replacing the battery, despite re-sealing with permanent adhesive.


Good to know! IIRC the Round didn't have as high a water resistance rating to begin with, and I typically wear a leather band, so this might not be a deal-breaker for me.


The PTRs weren't diving watches for sure, but the original waterproofing was easily good enough to withstand submersion, as long as the battery hadn't started swelling yet.


Ah that is excellent to know, I appreciate it!


The advantage of a watch that only needs to charge every few weeks is that it likely doesn't need a lot of time to top it off if you're charging it daily (unless it has a massive battery or only charges extremely slowly). When I wore Pebble watches, charging them while I was in the shower was plenty of time to last until I showered the next day. With my Apple watches, I've gotten into the habit of not wearing them for a while in the evening as well as my morning shower.


Indeed, I try to do that with my Garmin Venu 2 (~11-day battery life) when I remember. Otherwise, I charge it on weekends while I'm home doing nothing.

It's also nice because most trips I take are under 11 days so I don't need to remember to pack the charger.


That doesn't work for broad societal problems.



