Here’s a simple analogy.

Human: 1 + 1 + 1 + 1 = 4
Machine: 2 + 2 = 4

The machine ‘knows’ the result must be exactly 4. It is just finding new ways to arrive at a result it already knows to be true. But I want the machine to arrive at a result that is hitherto unknown. That, for me, is AI.




Not this one, but later theorem provers have proved plenty of new results, although probably not many of them are very interesting. Here is a two-decade-old list [1]. One of the more interesting results is the Robbins conjecture, which was proved by a prover called EQP.

[1] http://www.mcs.anl.gov/research/projects/AR/new_results/


We humans often arrive at new discoveries by building on stuff we already know.

We knew that combustion engines could rotate an axle. We knew that axles could drive a wheel. We knew that wheels turning could drive a cart. Thus, the automobile.

Just like humans, AIs can only solve problems they already understand.


Another way to phrase this is that, in the realm of theorem proving, a human is only going to 'validate' a response from an AI so long as there exists some mathematical or proof-theoretic foundation against which to validate it.

To do otherwise is insanity, unless the AI can somehow provably demonstrate that it has invented a better system than all existing mathematics, computation, and logic.

AI has plenty of room to grow in the realm of arts and creativity, because those are places where a base foundation does not need to be so mechanically reinforced. Toppling all existing knowledge in the arts is fine to do; being that disruptive in the arts is perfectly acceptable and often welcome.

When it comes to the safety of code, to expectations of what a machine is going to do for really important stuff, you want a solid, clear, strong reasoning system that is easily understood and easily conveyed to those looking to embrace it. Lots of math and abstraction exists either in the minds of humans or on a computer, but to take one's hand and smack the whole house of cards down simply because it can't be proven to be absolute perfection is a ridiculous thing to do. We know what computers can do because we decide what computers can do. We need to retain some 'ground' in reasoning, because what would happen if we collectively stepped away from it all for too long?

42.


> Just like humans, AIs can only solve problems they already understand

This isn't actually true. Problems can be solved by random chance, as long as there are good ways to test the validity of solutions. The most obvious example of this is the evolution of living organisms.

As long as AI can "understand" whether or not its solution solves a certain problem, it can just assemble random solutions and test them. There's obviously an incentive to decrease the randomness as much as possible, but an infinitely powerful computer wouldn't need any such optimization at all.
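The generate-and-test idea can be sketched in a few lines. This is a minimal illustration (the problem, numbers, and helper names are my invention, not from the thread): the search loop never reasons about the problem itself, it only assembles random candidates and checks them against a validity test.

```python
import random

# Generate-and-test: the loop has no "understanding" of subset-sum;
# it only knows how to check whether a candidate solution is valid.
def is_valid(candidate, target):
    return sum(candidate) == target

def random_search(numbers, target, max_tries=100_000, seed=0):
    rng = random.Random(seed)
    for _ in range(max_tries):
        # Assemble a completely random candidate subset...
        candidate = [n for n in numbers if rng.random() < 0.5]
        # ...and keep it only if the validity test passes.
        if is_valid(candidate, target):
            return candidate
    return None

solution = random_search([3, 9, 8, 4, 5, 7], target=15)
print(solution)
```

With enough tries (or the "infinitely powerful computer" above), the validity check alone is sufficient; decreasing the randomness is purely an efficiency optimization.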


If you watch videos of AIs learning to run [1] or playing Mario [2], you see that there's definitely "try random shit until your reward function gives you positive feedback" as a method for training AIs.

The difficult part is developing the reward function. There's a lot of intelligence in our hormone-based incentive system: that we feel pain, that we feel sad, etc. Many of those emotions are pre-programmed. If we find a way to design reinforcement systems with the right goals, we can do it like mother nature did it over several billion years. But it still requires a ton of computational power.

If we look at nature as a massively parallelized computational system that defined "just live" as the reward (not because it "wants" it, but because it just emerged), it shows us how much power we would need if we tried to build a completely random process that gains intelligence.

I think your proposed method works ("just assemble random solutions and test them") because we've already seen it in nature. But it takes a lot of time and energy, regardless of our computational power. I think we're faster when we find the right boundaries and reward functions instead of randomly trying stuff.
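The contrast between pure randomness and a reward function can be seen in a toy sketch along the lines of Dawkins' "weasel" program (the target string and parameters here are illustrative, not from the thread): random mutation supplies the "try random shit", and the reward function, here just a count of matching characters, supplies the feedback that lets progress accumulate.

```python
import random
import string

# Reward-guided random search, weasel-program style. Pure random
# search over all 27^29 strings would essentially never finish;
# keeping mutants the reward function likes changes everything.
TARGET = "REWARD FUNCTIONS GUIDE SEARCH"
ALPHABET = string.ascii_uppercase + " "

def reward(candidate):
    # The entire "understanding" of the problem: count matches.
    return sum(a == b for a, b in zip(candidate, TARGET))

def evolve(seed=0):
    rng = random.Random(seed)
    best = "".join(rng.choice(ALPHABET) for _ in TARGET)
    generations = 0
    while best != TARGET:
        generations += 1
        # "Try random shit": mutate one random position...
        i = rng.randrange(len(TARGET))
        mutant = best[:i] + rng.choice(ALPHABET) + best[i + 1:]
        # ...and keep it only if the reward gives positive
        # (or at least neutral) feedback.
        if reward(mutant) >= reward(best):
            best = mutant
    return best, generations

result, gens = evolve()
print(result, gens)
```

The loop reliably reaches the target in a few thousand generations, which is the point of the comment above: the expensive part nature paid for is not the mutation, it's discovering a reward signal worth climbing.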

[1]: https://youtu.be/gn4nRCC9TwQ [2]: https://youtu.be/qv6UVOQ0F44


> If we look at nature as a massively parallelized computational system that defined "just live" as the reward (not because it "wants" it, but because it just emerged), it shows us how much power we would need if we would try to build a completely random process that gains intelligence.

With that definition, I can't see how artificial life we created would be much different from a computer virus.

> I think we're faster when we find the right boundaries and reward functions instead of randomly trying stuff.

Boundaries and reward functions in human society are part of the 'human social organism' that allows each individual human to function with autonomy in a fashion that is collaborative and aligned with our developed value systems, and that lets us live with stability, security, faith (be it in some sense of wonder, divinity, in each other - doesn't matter), and independence. These are base needs, whatever variation they manifest with. Boundaries are redefined when there is conflict, and the less violent and destructive the conflict, the better the chances are for these boundaries and reward functions to continue functioning as they have been developed, rather than being obliterated entirely and having to be rebuilt from scratch.

It makes us faster, but it's also us standing on the shoulders of giants. And I think it's important to question which things are worth applying random solutions to, and which things require deep contemplation. It seems very paradoxical to try to define something that is both a function of the system it exists in and also something that could potentially break the whole system. But creation, destruction: clearly an oversimplification.

I do know that what looks random to one person is not necessarily random to another. This goes back to how the context is defined, how society is defined, boundaries and reward functions established a priori.

Emotions can be simple. The problem is that a computer already has them. Computer produces wrong solution, algorithm dies. Computer produces right solution, algorithm survives. We don't give the computer words to express itself about this because we never taught it to do that. What would happen if we did?


> try to define something that both is a function of the system it exists in, and also something that could potentially break the whole system

That is what's so damn hard about shaping societies (and the market). The recursive feedback loop and the self-interference.

It also happens at the individual level: we're able to dynamically adapt our reinforcement system using meta-cognition (based in the prefrontal cortex).

I understand the other parts of the answer, but I can't really see what you're trying to say with them (e.g. boundaries, society, and deciding whether or not we apply randomness).


I tend to be of the mindset that over the long term, you're going to get a good perspective if you approach the problem 50/50: apply randomness half the time, apply the knowledge you already have the other half. Divide and conquer, sort of.

Randomness expands the space in which you can identify error: find mistakes, find errors in reasoning (Monte Carlo simulations are traditionally used for this sort of thing). The other good thing about randomness: it finds ways to see errors as 'not errors', e.g. as a tool that can be developed and structured, a new way to see the problem, a creative approach. Some melding between the two seems to be something of significance for an AI. So you don't want it to be all random, because you need stability and structure; you need a 'language' or an 'awareness' you understand that isn't so chaotic and constantly changing that you can't even find a place to put your feet on the ground.
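As a concrete instance of the Monte Carlo idea mentioned above (this example is mine, not the commenter's): estimate pi by throwing random points at the unit square and testing whether each one lands inside the quarter circle. Pure randomness plus a simple test yields a useful, ever-improving answer.

```python
import random

# Classic Monte Carlo estimate of pi: sample random points in the
# unit square and test whether they fall inside the quarter circle
# of radius 1. The fraction inside approaches pi/4.
def estimate_pi(samples=1_000_000, seed=0):
    rng = random.Random(seed)
    inside = sum(
        rng.random() ** 2 + rng.random() ** 2 <= 1.0
        for _ in range(samples)
    )
    return 4 * inside / samples

print(estimate_pi())  # approaches 3.14159... as samples grow
```

The estimate's error shrinks roughly with the square root of the sample count, which is exactly the "lots of time and energy" trade-off discussed upthread.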

It doesn't have to be a perfect 50/50 balance, because that has its own set of problems: divide and conquer all the divisions and you still wind up with 2^n newly defined problem spaces to interpret, possibly losing sight of the bigger picture or of maintaining independent direction in one focused lineage of all those spaces. Just very generally, maintain balance, because the world is chaos sometimes.

It's like a stream of information. All the analysis to all of that space is meaningless if the context changes enough. So, adapt to a new context.

Honestly though, I don't know what I'm trying to say much of the time, aside, 'help', lol. Life is terrifying. :)


> Honestly though, I don't know what I'm trying to say much of the time, aside, 'help', lol. Life is terrifying. :)

from another comment made by you:

> Evolution does not care if an asteroid hits the earth and wipes out all life as it presently exists.

My advice: Stop worrying. Enjoy the randomness :) Maybe book a flight to Asia. Life is short. Embrace uncertainty. Sell luxurious sanitary pads to rich women in their 40s. Dress well and be funny. Then suddenly change your mood to sexy, ask a stranger for a kiss. I now write random love letters to my ex-girlfriends. Let's see where randomness will lead us. See you on the other side of existence.


Found the boundaries pretty early. Wearing nothing but socks and singing "Why does it hurt when I pee?" while trying to cross a freeway is not considered "appropriate in public". Schmucks everywhere.

I'm in jail now. I'm free now. But a little cold.


Evolution does not care if an asteroid hits the earth and wipes out all life as it presently exists. It doesn't care about the moral, ethical, or philosophical dilemmas when it decides how to recycle one living organism into another. It doesn't have the foresight to know how one solution may affect everything versus another. It just picks whatever seems to work based on the context given. If the context is described incorrectly, it doesn't care if the unthinkable happens and the whole system collapses. That just becomes the present understanding of the system, but it won't matter if there's no one left to listen.

We don't have infinitely powerful computers.


I would, however, argue that if you can tell the difference between a good and a bad solution to the problem (through some utility function, or a fitness function for evolutionary algorithms), you do have some understanding of the problem.

But, yes, I agree that you could just arrive at the correct solution through random chance. I guess a better description is that to solve a problem we either need an understanding of the problem or enough resources and time to exhaust the solution space.


> But I want the machine to arrive at a result that is hitherto unknown.

Like within a subset of problem solving here: 1 + 1 = 2?



