
> the fundamental approach to problem-solving is evolution itself

Not quite. The fundamental approach to problem solving is search, and evolution is a kind of search. It's not a particularly efficient search algorithm, but it has the interesting property that it is able to invent new search algorithms that are more efficient than it is. We have a pretty good handle on search. What we do not yet have is the foggiest clue about this sort of "meta-search", the search for new kinds of searching mechanisms. We've invented/discovered one: the Turing machine, and everything we've done since then has been a riff off that one idea. Even so-called "neural networks" are really just fairly uninteresting algorithms that happen to bear a superficial resemblance to our brains and happen to produce some interesting results. But, as you very correctly observe, they are nowhere near the Right Answer.

UPDATE: OK, quantum computers probably count as a mechanism distinct from TMs. But I'll give long odds against QC turning out to be the key to understanding how our brains work, Roger Penrose notwithstanding.




> The fundamental approach to problem solving is search, and evolution is a kind of search

Ehh, problem-solving is search (searching for an answer), so this statement doesn't really make sense except as a tautology. I agree that evolution is a core means by which problems are solved, and perhaps the core means.

Humans are incredibly good searchers because of the evolution we’ve undergone, to the extent we’ve started evolving our own searchers to enhance our problem solving ability. I’m not sure if we can reasonably say that those searchers have started to evolve their own searchers, but it’s safe to say it will happen soon.

The big question to me is what beings evolved us (the universe/whatever the embedding we call reality really is), and what problem were we meant to solve? This is basically unknowable as far as I can tell, and the foundation of all “religion”.


Not all problem solving is search and not all search is problem solving. Some problems can be solved algorithmically with no search (e.g. computing a square root) and search can obviously be done in service of something other than problem-solving (unless you take the meaning of "problem" to be so broad as to apply to just about any activity).
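To make the square-root example concrete, here's a minimal sketch of my own (not anything from the thread): Newton's method, a direct algorithm that refines one guess with no alternatives to consider and nothing to backtrack to.

```python
# Newton's method for square roots: a direct algorithm, not a search.
# Each step improves the single current guess; no branching, no backtracking.
def sqrt_newton(x, tol=1e-12):
    guess = x if x > 1 else 1.0
    while abs(guess * guess - x) > tol:
        guess = (guess + x / guess) / 2
    return guess
```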

I also intended the word "search" to be understood in a narrow technical sense:

https://en.wikipedia.org/wiki/Search_algorithm

[EDIT] Just to clarify, because I just realized that these two statements might seem contradictory:

"The fundamental approach to problem solving is search"

"Not all problem solving is search"

All problem solving can be cast as search, but some problems have heuristics strong enough that the correct branch of the search space is chosen first at every decision point, so no backtracking is ever required and the "search" becomes degenerate. The word "search" is generally reserved for algorithms that lack such strong heuristics and do require backtracking.
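To illustrate the distinction with a hypothetical sketch of my own: a generic depth-first search with backtracking, applied to a problem (bisecting for a number) where a perfect heuristic picks the right branch every time, so the backtracking machinery never actually fires.

```python
def backtracking_search(state, is_goal, successors, heuristic=None):
    """Depth-first search; `heuristic` orders children best-first."""
    if is_goal(state):
        return [state]
    children = successors(state)
    if heuristic:
        children = sorted(children, key=heuristic)
    for child in children:
        path = backtracking_search(child, is_goal, successors, heuristic)
        if path is not None:
            return [state] + path
    return None  # dead end: backtrack

# Example: locate 11 in the interval [0, 16) by repeated halving.
def successors(interval):
    lo, hi = interval
    if hi - lo <= 1:
        return []
    mid = (lo + hi) // 2
    return [(lo, mid), (mid, hi)]

target = 11
# A "perfect heuristic" (does the target lie in this half?) makes the
# search degenerate: the first child tried is always correct.
path = backtracking_search(
    (0, 16),
    is_goal=lambda iv: iv == (target, target + 1),
    successors=successors,
    heuristic=lambda iv: 0 if iv[0] <= target < iv[1] else 1,
)
```

Drop the heuristic and the same code still finds the answer, but only by exploring and abandoning wrong branches, which is what we normally call search.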


I fear this is just a debate over semantics, which is always an unhappy debate. In my opinion “finding the square root” isn’t a “problem”, perhaps precisely because it doesn’t involve search. Something like “how can I determine the side length of this square I needed 67 sq feet of tile to cover” is a problem: I could use your algorithm on paper, I could guess based on prior experience, I could measure, I could estimate, I could use a calculator, etc. Which of those I pick is a search based on the constraints at hand (how much time do I have, what resources do I have, how precise an answer do I need, etc), and the act of considering many approaches to the problem and proceeding with some based on the constraints at hand is evolution.

Again, this is all semantics imo, so not really a debate so much as sharing world views. Which perhaps is what debate is. Hm.


Oh man, reading what you wrote out, it just occurred to me that learning is actually caching.

We already have a multitude of machines that can solve any problem: the global economy, corporations, capitalism (Darwinian evolution cast as an economic model), organizations, our brains, etc.

So take an existing model that works, convert it to code made up of the business logic and tests that we write every day, and start replacing the manual portions with algorithms (automate them). The "work" of learning to solve a problem is the inverse of the solution being taught. But once you know the solution, cache it and use it.
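The "cache it and use it" step has a direct programming analogue in memoization. A toy sketch of my own, using Python's functools.lru_cache: the expensive "learning" work runs once, and every later query is a cache hit.

```python
from functools import lru_cache

calls = {"count": 0}  # track how often the expensive work actually runs

@lru_cache(maxsize=None)
def solve(n):
    """Stand-in for an expensive problem-solving search."""
    calls["count"] += 1
    return sum(i * i for i in range(n))  # pretend this is the hard part

solve(1000)  # does the work: the "learning"
solve(1000)  # cache hit: the known solution is reused, not re-derived
```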

I'm curious what the smallest fully automated model would look like. We can imagine a corporation where everyone has been replaced by a virtual agent running in code. Or a car where the driver is replaced by chips or (gasp) the cloud.

But how about a program running on a source code repo that can incorporate new code as long as all of its current unit tests pass? At first, people around the world would write the code. But eventually, more and more of the subrepos would be cached copies of other working solutions. Basically just keep doing that until it passes the Turing test (which I realize is passé by today's standards; look at online political debate with troll bots). We know that the compressed solution should be smaller than the 6 billion base pairs of DNA. It just doesn't seem like that hard of a problem. Except I guess it is:

https://github.com/opencog/opencog


> “finding the square root” isn’t a “problem”, perhaps precisely because it doesn’t involve search

Well, it can be solved by search (as I've defined it). It just happens that it doesn't have to be.


In my own life, I've come to view the search for meaning as the fundamental goal of consciousness. My good friend calls this the "reconnection" and I've often wondered if connectedness could be used as the primary incentive for a general machine learning algorithm.


You can look at a genetic algorithm as a neighbourhood search, where the neighbourhood is defined by whatever you can reach via one mutation. Crossover can be seen as a way to avoid getting stuck near a local optimum.
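A minimal sketch of that view (my own toy example on the classic OneMax problem, where fitness is just the count of 1-bits): the neighbourhood move is a single bit flip, while crossover splices two parents and can jump to a region no single mutation reaches.

```python
import random

random.seed(0)  # deterministic, for illustration only

def fitness(bits):
    return sum(bits)  # OneMax: maximise the number of 1s

def mutate(bits):
    # one step into the 1-mutation neighbourhood: flip a random bit
    i = random.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

def crossover(a, b):
    # splice two parents: a jump plain neighbourhood search can't make
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# mutation-only neighbourhood search (hill climbing)
current = [0] * 20
for _ in range(2000):
    candidate = mutate(current)
    if fitness(candidate) > fitness(current):
        current = candidate
```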


That's not quite fair to evolution: crossover is vastly more important to real world evolution than mutation (even as mutation is critical to create the raw material that's crossed).

I don't think I have ever seen a GA application that made proper use of crossover, because most use genotype->phenotype codings that can't even in theory make it useful. Fixing that is where anyone young starting down this road should begin.


Viruses disagree about crossover.


Ya, I was just thinking that the simplest learning algorithm would actually be something like "solving systems of equations as a matrix" (I'm-feeling-lucky Google result):

https://www.mathsisfun.com/algebra/systems-linear-equations-...

So you enter your inputs and outputs in a big table and solve by brute force to get an algorithm. It might not be an interesting one, or one useful beyond that one problem, but it's something.
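For concreteness, a tiny made-up version of that idea: observe an input/output table generated by an unknown linear rule, write each pair as an equation in a small matrix, and solve for the rule (here by Cramer's rule rather than brute force, but the spirit is the same).

```python
def solve_2x2(m, v):
    """Solve the 2x2 linear system m @ [a, b] = v by Cramer's rule."""
    (a11, a12), (a21, a22) = m
    det = a11 * a22 - a12 * a21
    assert det != 0, "system must be non-degenerate"
    a = (v[0] * a22 - a12 * v[1]) / det
    b = (a11 * v[1] - v[0] * a21) / det
    return a, b

# Observed input/output table produced by the hidden rule y = 3x + 7:
xs, ys = [1, 4], [10, 19]
# Each row encodes a*x_i + b*1 = y_i.
a, b = solve_2x2([[xs[0], 1], [xs[1], 1]], ys)
# (a, b) recovers the rule: a == 3.0, b == 7.0
```

The recovered (a, b) pair is exactly the "algorithm" the table implied: not an interesting one, and useless beyond this problem, but something.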

There are also some graphical techniques for truth tables (boolean-value matrices):

https://www.allaboutcircuits.com/textbook/digital/chpt-8/log...

And this is kinda sorta loosely related to classification algorithms like k-means clustering, at least that's how my brain associates them:

https://en.wikipedia.org/wiki/K-means_clustering
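For reference, k-means itself is short enough to sketch. A minimal 1-D version with k = 2 (Lloyd's algorithm, on toy data of my own):

```python
data = [1.0, 1.2, 0.8, 8.0, 8.3, 7.9]
centers = [0.0, 10.0]  # deliberately bad initial guesses

for _ in range(10):
    # assignment step: each point joins its nearest centre
    clusters = [[], []]
    for x in data:
        i = min((0, 1), key=lambda i: abs(x - centers[i]))
        clusters[i].append(x)
    # update step: centres move to their cluster means
    centers = [sum(c) / len(c) for c in clusters]
```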

Anyway, I view neural networks as a way of solving a huge matrix with gradient descent (hill climbing) as the optimization technique. It fundamentally will always struggle with the local minimum (maximum) problem just like in calculus.
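The local-minimum trap is easy to demonstrate on a toy function. A sketch of my own: plain 1-D gradient descent on a quartic that has a shallow minimum near x ≈ 1.13 and a deeper global one near x ≈ -1.30.

```python
def f(x):
    # quartic with a shallow local minimum (~1.13) and a deep one (~-1.30)
    return x**4 - 3 * x**2 + x

def grad(x):
    return 4 * x**3 - 6 * x + 1

x = 2.0  # start in the basin of the shallow minimum
for _ in range(1000):
    x -= 0.01 * grad(x)  # plain gradient descent

# x settles into the nearby local minimum, never seeing the deeper one
```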

Whereas genetic algorithms bust out of that mental trap via random sampling. I'm sure there's a proof out there somewhere that says they can solve any problem, given enough time, which is why I view them as the basis of learning. As I write this with my neural net hmmm..
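On the same kind of toy quartic, brute-force random sampling (the GA-style escape hatch) does land in the deeper basin that gradient descent started in the wrong place would miss. A sketch of my own, inefficient on purpose:

```python
import random

random.seed(0)  # deterministic, for illustration only

def f(x):
    # quartic with a shallow minimum near 1.13 and a deep one near -1.30
    return x**4 - 3 * x**2 + x

# random sampling: wasteful, but not fooled by local minima
best = min((random.uniform(-3, 3) for _ in range(10000)), key=f)
```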

To me, the only way forward is at the meta level, quantum computing or similar (as you very astutely pointed out).

An analogy for this might be that the vast majority of us do our daily work on the imperative side of programming (Turing machines) without realizing it can be made equivalent to the vastly simpler functional side (lambda calculus), which solved many of the problems we face today 50+ years ago.
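A small illustration of that equivalence (my own example): the same computation written once in the imperative, state-mutating style and once as a pure functional expression.

```python
from functools import reduce

def sum_squares_imperative(xs):
    # Turing-machine style: mutate an accumulator step by step
    total = 0
    for x in xs:
        total += x * x
    return total

def sum_squares_functional(xs):
    # lambda-calculus style: one pure expression, no mutation
    return reduce(lambda acc, x: acc + x * x, xs, 0)
```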

The state of the art when I first started following this in the late 90s was optimizing neural nets evolved via genetic algorithms. I never heard anything else about it, but keep in mind that shortly after that, HP bought Compaq, private industry research spending cratered, native mobile app programming buried the innovation of declarative and data-driven programming that the web popularized, single-page apps made web development untenable, video cards ended CPU innovation, and wealth inequality directed us all into nursing CRUD apps back from the brink of death instead of making contributions to open source projects that could fundamentally advance the human condition.

The problems are so widespread and entrenched now that if we can even break past a few of them, then I see rapid and transformative change potentially happening very quickly. It's like how the forces that be work to make sure that nobody gets UBI, because we can't have the plebeians enjoying the same return on investment that the wealthy take for granted. No, everyone must be kept poor and struggling or else they might just find the time to automate themselves out of a job.

Edit: sorry I just realized that I contradicted you on search. I was trying to make the point that if you could fit the whole search space into a matrix, then we could just solve it. Since that's not possible for most real-world problems, then yes I agree with you that search is actually the fundamental problem-solving algorithm.



