
I think it's a bit more subtle than whether it can synthesize, because clearly it can produce seemingly new work.

But what it definitely cannot do is seek new abstractions. It can't be curious about an inconsistency and probe its own knowledge for possible resolutions, or design an experiment that might shed further light on an unknown area. It can't even play a board game after ingesting its rules, much less identify contradictions or problems in such a ruleset. And a board game is a tiny microcosm compared with the laws of physics.

It's conceivable that one or more scientists could work in conjunction with an AI to help augment their own abilities, co-pilot style, but I don't think we have a picture yet of what exactly that kind of thing would look like.



I kind of feel like _all it can do_ is seek new abstractions. It’s clustering the data into what you might think of as “concepts,” even though we haven’t made all that much progress on interrogating those concepts directly. I just think clustering concepts by their latent relationships is sort of what abstraction is.
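To illustrate what I mean by clustering on latent relationships, here's a deliberately toy sketch (the vectors and words are made up by me, not real learned embeddings): points that sit near each other in the latent space get grouped into something concept-like, even though no cluster is ever given a name or interrogated directly.

```python
# Toy illustration (hypothetical vectors, not real embeddings): nearby
# points in a latent space get grouped into something "concept"-like.
import numpy as np
from sklearn.cluster import KMeans

# pretend these are learned embeddings for a handful of words
words = ["cat", "dog", "wolf", "hammer", "wrench", "screwdriver"]
vectors = np.array([
    [0.90, 0.80, 0.10],   # the animals sit near each other...
    [0.85, 0.90, 0.15],
    [0.80, 0.75, 0.20],
    [0.10, 0.20, 0.90],   # ...and the tools near each other
    [0.15, 0.10, 0.95],
    [0.20, 0.15, 0.85],
])

# group the points purely by proximity in the latent space
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
for word, label in zip(words, labels):
    print(word, "-> cluster", label)
```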


Hmm, interesting. I feel like that clustering/grouping part is like... the first stage on the way to finding an abstraction, but if all you've done is that part of it, then you haven't reasoned about what the abstraction is, or extrapolated out to a new hypothesis, or grappled with the other consequences of the grouping.

Thinking of mathematics, an abstraction is only as valuable as what it can tell us about the underlying system: a Laplace transform into the frequency domain isn't particularly interesting unless either a) we happen to care about some frequency-domain property of the function, or b) we can manipulate the transformed function and then un-transform it afterward, yielding a novel insight in the original time domain.
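As a tiny sketch of that "transform, manipulate, un-transform" pattern in point (b), here's a toy example of my own (using sympy, with a signal I picked arbitrarily): differentiation in the time domain becomes plain algebra in the s-domain.

```python
# Toy sketch of the transform -> manipulate -> un-transform pattern:
# differentiating a signal becomes multiplication by s in the s-domain.
import sympy as sp

t, s = sp.symbols("t s", positive=True)
f = sp.exp(-2 * t) * sp.sin(3 * t)               # some time-domain signal

F = sp.laplace_transform(f, t, s, noconds=True)  # a) move to the s-domain
G = s * F - f.subs(t, 0)                         # manipulate: s*F - f(0) ~ d/dt
g = sp.inverse_laplace_transform(G, s, t)        # b) un-transform the result

# The detour through the s-domain should tell us something true back in
# the time domain: g should match the direct derivative.
print(sp.simplify(g - sp.diff(f, t)))            # expect 0
```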

My layman's assessment is that the current state of the art for machine cognition is about on the level of identifying that such a transformation might be possible, but not actually applying it or doing anything about it, much less unapplying it afterward.


I think I get the gist of your point, which I would handwavily summarize as “there is no there there,” and which I agree with. But I’m thinking that in terms of functionality, there is no real distinction here. If I train a model to do video game physics and reward it for getting the tank shell to land on arbitrary targets, then in some sense the system must have an abstraction of gravity, or at least of parabolic movement, or at least a functional equivalent of that. It must also be able to “hypothesize” about the consequences of its model, because that’s what it needs to do in order to hit the target, i.e., “everything I’ve seen before leads me to believe tank shells move like so, therefore if I apply the following forces, it should hit the goal.” The degree to which it wins at this task is exactly the degree to which it can form working models of its environment and reason about the cause and effect of the actions it can take. But of course all those words in the previous sentence are wrong, because it’s not exactly doing any of that, just something functionally equivalent and automatic.

Maybe we disagree about whether that last point means it can meaningfully “do abstraction” or not.
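To make what I mean by “functional equivalent” concrete, here's a deliberately dumb toy version of that tank-shell setup (the numbers and the random-search “learning” are entirely my own invention, just for illustration): the only thing that ever gets tuned is a launch angle against a simulated reward, yet whatever angle it converges on implicitly encodes the parabolic physics, even though nothing in the loop names gravity to it.

```python
# Toy tank-shell task: tune a launch angle against a simulated reward.
# Hitting the target requires, functionally, a model of the parabola,
# even though the "learner" is just crude trial and error.
import math
import random

G = 9.81          # gravity used by the simulated environment
SPEED = 30.0      # fixed muzzle speed
TARGET_X = 60.0   # where we want the shell to land

def landing_x(angle_rad: float) -> float:
    """Environment: flat-ground ballistic range for a given launch angle."""
    return (SPEED ** 2) * math.sin(2 * angle_rad) / G

def reward(angle_rad: float) -> float:
    """Negative distance from the target; higher is better."""
    return -abs(landing_x(angle_rad) - TARGET_X)

# Crude trial-and-error "learning": keep whichever angle scores best.
best_angle = random.uniform(0.0, math.pi / 2)
for _ in range(5000):
    candidate = random.uniform(0.0, math.pi / 2)
    if reward(candidate) > reward(best_angle):
        best_angle = candidate

print(f"learned angle: {math.degrees(best_angle):.1f} deg, "
      f"lands at x = {landing_x(best_angle):.1f} (target {TARGET_X})")
```

Swap the random search for a policy network and the point is the same: the degree to which it hits the target is the degree to which it has, functionally, internalized the parabola.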



