Darwin, Machines, and the Future: A conversation with George Dyson (nfx.com)
71 points by domrdy on Aug 4, 2020 | 43 comments


The lineage of technological progress does not follow the same path as commercial development of a pre-existing technology applied to a pre-existing market. We need to be careful not to equate the two. As much as a D2C brand or SaaS startup may brilliantly bring a new market innovation to the fore, they rarely bring new technological innovations--and the potential to expand the pie rather than extract market value--with them. Technological progress is a halting, stop-and-start process, and the market isn't always welcoming to it even in cases when the economics of the innovation make sense. So, yes--if you're working on truly progressive technology, take this sense of purpose the author speaks of to heart. But tech as an industry shouldn't appropriate this purpose when it cannot follow through on it.


I respectfully disagree. New technologies don't intrinsically expand the pie; lowering the price/performance ratio does. D2C and SaaS lower the cost of distribution, which has historically been a significant markup on goods. The consumer pays less and thus has more money to spend on other goods and services. Therefore the pie expands.

Examples of technologies failing to change price/performance:

1) Self-driving cars: can you replace people working for less than minimum wage with expensive hardware and liability-related costs? Eventually yes, but even airplanes are struggling, and that is a far easier problem.

The counterexample might be closer to Tesla's Autopilot.

2) DC electrical grid (as implemented by Edison): expensive to transmit over distance relative to AC.

3) Fuel cell cars: the platinum catalysts drove up the price. (There were other factors as well).

Examples of technologies succeeding to change price/performance:

1) Light bulb: factories could be productive even at night. Since most of the costs were fixed, getting more productivity was invaluable.

2) CPUs: Moore's law.

3) Solar: Manufacturing costs were the real obstacle, and improved manufacturing and economies of scale led to an economically viable product.

I don't claim to be perfectly right about all these cases but I think they illustrate the gist of what I'm saying.


> As Dyson says, we’re in the middle of a Black Swan

It’s interesting to note that Nassim Taleb, who coined the term “black swan”, disagrees that the current pandemic qualifies. As he points out, not only was this predictable, it was also inevitable. We have literally made movies portraying it.


Also DARPA - https://www.washingtonpost.com/national-security/how-a-secre...

There were comments in HN prediction threads for 2020, for the next decade, etc. talking about pandemics. The hive mind belches out some interesting stuff once in a while.


Not to mention Gates, who warned of this barely five years ago:

https://www.ted.com/talks/bill_gates_the_next_outbreak_we_re...


Agreed. Same with big fires or earthquakes in California. We know they are coming. We are just hoping they will come a little later.


From the article: "In terms of a distributed artificial intelligence, Dyson is a believer. Because with a distributed system, you have an opportunity for evolution, for it to find itself and to learn on its own."

I published a paper in this year's AGI conference that would certainly support this line of reasoning. In my paper, "AGI and the Knight-Darwin Law", I argue that if an AGI single-handedly creates a child AGI with no outside assistance, then the child is necessarily less intelligent than the parent. Thus, if machines are going to create more intelligent machines, it's necessary for the creating machines to collaborate in order to do so. This closely parallels a law proposed by Darwin, called the Knight-Darwin Law, a cornerstone of his Origin of Species. The KDL states that it's impossible for one organism to asexually produce another, which asexually produces another, and so on forever; sexual reproduction is necessary, or the line must terminate. (Darwin was of course well aware of seemingly-asexual species. His motivation for postulating the KDL was the observation that seemingly-asexual species do rarely sexually reproduce, e.g. if a rainstorm damages the part of a flower that would otherwise isolate its stamen, etc.)

Here is the paper: https://philpapers.org/archive/ALEAAT-10.pdf


What about the trivial case of the AGI simply self-replicating? Then you have a child, it is identical, and is therefore of identical intelligence (however you measure that), and therefore it is not necessarily less intelligent.

What if the AGI is intelligent enough to randomly mutate offspring here and there? Then one might be more intelligent, and again it is not necessarily less intelligent.

What if the AGI is intelligent enough to look at its own code, identify some kind of bias in its programming (not unlike human introspection), and produce a child without that bias? That child would then be better able to model reality -- a core element of intelligence -- and again you've got a smarter child.

I have a difficult time buying the "necessarily" part.


> I have a difficult time buying the "necessarily" part.

Yes. That's borrowed from an old theological argument.

Especially since much of the progress in machine learning is just putting more hardware and data on the same algorithms.


Could you clarify what you mean when you say "That's borrowed from an old theological argument"?


I think a variation of the argument is: You couldn't simulate the universe in the universe because you would need at least as many resources as the universe has but are limited to resources within the universe (unless you want to get hacky).

I assume they are implying that this thought process may not apply.


One of the neat things about the argument in my paper is that it doesn't involve the parent AGI actually running the child AGI. It merely involves the parent writing source-code and knowing that IF it had enough resources to run the source-code, then the result would be a truthful AGI child.


Sigh. Here, from a creationist site.[1] There's an argument that the second law of thermodynamics prevents evolution, because evolution decreases entropy.

Having a star around to power things helps beat that problem.

[1] https://christiananswers.net/q-eden/edn-thermodynamics.html


The only similarity between that and my argument is that both involve a measure decreasing. Are you saying that one fallacious argument involving a decreasing measure invalidates all arguments ever that involve (completely unrelated) decreasing measures?

As is so often the case in academia, the basic idea is hidden by all the academic jargon. The basic idea is quite simple. Suppose you roll up your sleeves, get to work, and announce to the world: "Turing machine #57835 is an AGI with truthful knowledge". Then, without even needing to run Turing machine #57835 (which might be prohibitively expensive), just as a purely logical consequence of your announcement, you can conclude that the following ordinal number A is a computable ordinal: "The supremum of all ordinals B such that B has some code k such that Turing machine #57835 knows that k is a code of a computable ordinal". By construction, this ordinal A, for which you concretely know a code, is larger than any ordinal for which Turing machine #57835 knows a code. So in at least one extremely specific technical sense (the sense of knowing codes for large ordinals), you are more intelligent than Turing machine #57835.
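
To put the same thing in symbols (my own shorthand: K(k, B) abbreviates "Turing machine #57835 knows that k is a code of the computable ordinal B"), the ordinal in question is

    A = \sup \{\, B \;:\; \exists k \; K(k, B) \,\}

and the claim above is that A is itself computable -- a code for it can be extracted from your announcement -- and is larger than any ordinal the machine knows a code for.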

Which part of this argument is unclear?


You answer the question, but why the sigh? Shouldn't relevant clarifications be encouraged?


Right, I should have clarified that I meant: create a child in such a way as to know the child's truthfulness and source-code. Of course, without that requirement, a monkey could blindly type up an AGI source-code on a typewriter by sheer dumb luck. As for an AGI simply self-replicating, that is possible, but then the AGI would not know the child's truthfulness and source-code, since an AGI cannot know its own truthfulness and source-code.


Still, if it is identical, then it could not necessarily be less than.

I may not know what value is in the box (my own truthfulness and source-code), but if it is replicated, then the value ought to be identical, and therefore not less than.


Right, but the machine which prints its own source-code does not do so in such a way as to know the child's truthfulness (because the machine cannot know its own truthfulness and source-code, assuming the machine itself is truthful and satisfies basic assumptions about how it can reason).

I could design a program that systematically prints out all possible source-codes, and then try to claim credit for designing the first AGI (because, at least if AGI is possible at all, my brute-force source-code-printer will eventually print it). But that would be silly because I wouldn't actually know which of the many many outputs from my printer is the actual AGI source-code :)
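
A toy version of that printer, in Python (purely illustrative -- it just enumerates every printable string, shortest first):

    import string
    from itertools import count, product

    # Print every printable string in order of length. If an AGI source
    # file exists at all, it eventually appears in this stream -- but
    # nothing here tells us which output it is, which is the whole point.
    for length in count(1):
        for chars in product(string.printable, repeat=length):
            print("".join(chars))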


I think the "knowing" is irrelevant.

If I have two boxes, and you tell me that the contents of box one, its value, are identical to the contents of box two, then I do not need to know the value to know that box two is not less than box one. Here is a pointer to a section of memory; copy it.

As long as the copy operation is successful, you do not need to know the value of what you have copied to know that it is identical, and therefore not "less than."

Hence, if you self-replicate, then it is not "less than;" and if you cannot know that two things are identical, then you cannot self-replicate.

I think you are stuck saying that your AGI cannot self-replicate if you want to hold onto "necessarily less than." It's just a basic principle of equality, you cannot get around it.


I think we're talking past each other. My claim is that if an AGI single-handedly creates a child AGI, in such a way as to know the child's source-code and truthfulness, then the parent is more intelligent than the child (in a certain technical sense).

You rightly point out that an AGI can duplicate itself (if it knows its own source-code). This doesn't contradict my claim, because it doesn't satisfy the premise. It's like if I give you a box of candies and I say, "Every blue candy in this box is round", and you pull out a red candy which is square, you can't say my statement was false because that red candy isn't round. To falsify the statement you'd have to produce a blue candy that isn't round.

The AGI which self-duplicates would only falsify my claim if it self-duplicates in such a way as to know the duplicate's source-code and truthfulness. Assuming said AGI is truthful and satisfies certain assumptions about how it can reason and know, then it can't know its own source-code and truthfulness, so it doesn't falsify my claim.


I don't think we are talking past each other, I think we're having a debate as to whether or not methods of replication have some impact on whether or not two things are identical, and therefore have the same value.

Imagine a simple single-celled organism. It is well-suited for its environment and so forth. It reproduces asexually without error. Now, it isn't terribly bright, it hasn't developed PCR or anything, it hasn't had a Watson or a Crick, so it doesn't "know" its source code, but it has definitely replicated and done so to produce an identical copy. I'm not even sure what truthfulness means in this context, but I can confidently say that identical things have the same value, and therefore the offspring organism is not less than the original.

The "in such a way" has an impact in the way you're viewing it, and to me, I do not understand that. Identical is identical. I don't have any other way of looking at that. If the "in such a way" does have some kind of effect, then they are not identical and you can hardly say that the AGI has self-replicated.

Can your AGI "quine out" a copy of itself or not? Can I look at the two programs on disc and perform a binary comparison to prove that they are byte-per-byte identical?
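
(For the record, "quining out" a byte-identical copy is easy to demo in miniature. A minimal Python quine -- no comments inside it, since any extra characters would break the self-reproduction:

    s = 's = %r\nprint(s %% s)'
    print(s % s)

Save those two lines as, say, quine.py, redirect the output to child.py, and a byte-wise comparison such as cmp reports the two files identical -- without anyone needing to "know" what the bytes mean. The file names here are just placeholders.)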


I'm not debating about whether an AGI can duplicate itself or not. If an AGI knows its own source-code, it can duplicate itself. I literally acknowledge this in the paper itself (section 5.2).

Rather, what I am saying is that it doesn't have any bearing on the thesis, "If an AGI single-handedly creates a child AGI, in such a way as to know the child's truth and source-code, then the child is less intelligent than the parent". And the reason it has no bearing is because, since an AGI cannot know its own truthfulness and source-code, AGI self-duplication does not satisfy the hypotheses of the claim.

Again, this is like if I claim "If x>2 then x²>4". The fact that 1² is not >4 has no bearing on the truth of the claim, because 1 is not >2. To make the analogy crystal clear, the "x>2" part corresponds to "the AGI single-handedly creates a child AGI in such a way as to know its truthfulness and source-code", and the "x²>4" part corresponds to "the child AGI is less intelligent than the parent".


Then true clarity would be to say "The AGI has no access to its own source or even compiled code, and therefore cannot self-replicate an identical copy." That's much clearer. These abstract terms like "truthfulness" and "know" are not obvious, and your scenario is a little baffling. Most people concerned about AGIs self-improving consider access to their own source code or even the ability to copy a directory as a given.

I keep using extremely concrete terms here, with specific examples, and you keep returning to this one sentence where a lot of work is being performed by words like "know." The idea that an AGI somehow "cannot" (is it forbidden by a directive? has the information been hidden away? is this a physical principle of the universe emerging from conservation of baryon number?) "know" (what does this word do?) its source-code just seems very far away from scenarios most people would consider.


I never said an AGI cannot know its own code, I said it cannot know both its own code AND its own truthfulness (while being truthful and satisfying certain other idealized assumptions). As for why this is the case: it's not an assumption, but rather, a theorem, one which has been known for a long time.

Here's an extremely oversimplified proof: suppose Turing machine #500 encodes an AGI X and that AGI X knows "My knowledge is truthful" and "I am encoded by Turing machine #500". Using Goedel's diagonal lemma, there is a sentence phi, in the language of Peano arithmetic, which expresses "Turing machine #500 knows the negation of phi". Since X knows himself to be Turing machine #500 and X knows himself to be truthful, X should be able to infer that phi is false (because if phi were true, then Turing machine #500 would know the negation of phi, so, since Turing machine #500 is truthful, phi would be false, a contradiction). So X knows that phi is false. But phi is synonymous with the assertion that X knows phi is false---so, since we have shown "X knows phi is false", we have in fact shown phi is true. Absurd. Our only escape from the paradox is to acknowledge that X cannot know "I am truthful" and "I am Turing machine #500" simultaneously (and also really be truthful and satisfy idealizing assumptions that allow X to make the above inferences).
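
Schematically, writing K(.) for "Turing machine #500 knows ..." (my notation, compressing the argument above):

    \varphi \leftrightarrow K(\lnot\varphi)                         \quad\text{(diagonal lemma)}
    \varphi \rightarrow K(\lnot\varphi) \rightarrow \lnot\varphi    \quad\text{(first line, then truthfulness of K)}
    \therefore\ \lnot\varphi, \text{ and X can reproduce this reasoning}
    \therefore\ K(\lnot\varphi), \text{ i.e. } \varphi              \quad\text{(contradiction)}

so at least one assumption -- knowing its own code, knowing its own truthfulness, actually being truthful, or the idealized reasoning ability -- has to be dropped.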

You're right that I keep using words informally, because this is hacker news and not a logic journal. If you want to see things spelled out in gruesome detail (warning: math), you can read here: https://philpapers.org/archive/ALEMTI-2.pdf


Does it not depend on the method of reproduction? Is it only asexual? Do you believe you can make an AGI that is smarter than you? If so, can't the AGI make another AGI, in the same way as you made it, that is smarter than itself?


Indeed. This is a very curious and - IMO - unconvincing argument.

If AGI ever exists, it will evolve in a Lamarckian way, not a Darwinian way.

An AGI's ability to self-improve will be limited by hardware (computational throughput) and the stability and sophistication of the software.

The software will - presumably - be able to experiment with new configurations without the evolutionary expense of an entirely new body.

It will also be able to roll back failures without having to die, and potentially it might be able to search through a space of possible upgrades without having to instantiate them first.

Sexual or asexual reproduction is irrelevant because an A(G)I will not be defined by the usual animal limits. So it makes no sense to apply a model of life that's two centuries old to a new form of life that's very unlikely to play by the same rules.

Just making the attempt suggests that we're going to have a very difficult time understanding what's happening.


Playing by different rules does not allow one to defy the laws of mathematics.

In my original post, forgive me, I forgot to clarify that when I talk about an AGI creating a child, I mean doing so in such a way as to know the source-code and the truthfulness of the child. Obviously without that constraint, a monkey could randomly type an AGI's source-code just by dumb luck. But the lucky monkey would not understand the ramifications of what it had just typed.

If I create an AGI in such a way that I know its truth and its source-code, then I know a code for a certain computable ordinal---namely the supremum of the computable ordinals which the AGI knows codes for---which is larger than any ordinals for which the AGI knows codes. So in at least that one very narrow sense of intelligence, I am more intelligent than said AGI. No amount of "playing by different rules" will change that.


> Playing by different rules does not allow one to defy the laws of mathematics.

OK, but the rules in the paper seem pretty arbitrary and unintuitive to me, so by picking different rules one is no longer defying anything. In particular the requirements for exceptionally perfectly detailed knowledge of the child seem designed to make the conclusion tautologically true, but also not very interesting, as far as I can tell. "If I define impossible requirements, [thing] is impossible". OK, that's... fine, I guess.

Allowing for "trust" between AGIs to overcome issues with their properly knowing one another's "truthfulness", but not allowing the same shortcut when it comes time to judge their knowledge of the child's "truthfulness", seems especially odd. I'm also not sure how all this perfect knowledge is justified before we allow that an AGI has created a superior AGI—it seems impossible for any system, biological or AGI, so I'm not sure what use the conclusion is. "Perfect play always results in a cat's game (a draw) in tic-tac-toe, so you cannot win this game." Well, that's nice, but we're talking about poker...


Thanks for the feedback.

Regarding "impossible requirements", does it really go without saying that knowing an AGI's truthfulness is so impossible? I could write a computer program which does nothing but print "1+1=2", "1+2=3", "1+3=4", ... onto the screen, and know (with "exceptionally perfectly detailed knowledge") everything about this program and know that the things it prints on the screen are truthful.

So it's not that it's inherently impossible to have such detailed knowledge about just any computer program. What makes AGI special? Why is it that we should take it for granted that it's impossible to know the truthfulness of an AGI's knowledge? I'm not saying that's not impossible (indeed, by tweaking the assumptions, the argument in my paper could be considered an argument for said impossibility), I'm just trying to understand why you think it's so blatantly obvious that knowing the truthfulness of the AGI's knowledge is impossible.


Right, I agree that we can prove things about algorithms, but it seems that if you define truthfulness such that an AGI can't prove that about itself, then it follows it won't be able to prove it about anything more complex than itself—unless there's some property of "self-ness" that makes an AGI's self (or a copy of itself) especially hard for it to understand, which would be surprising.

I think the confusion in this thread is over the gap between what people normally think of when they think of an AGI creating a superior AGI, and this particular definition of "superior" that requires a particular kind of knowledge of superiority, which is something we don't really do ourselves very much, so far as how we "know" all the various things we "know". I don't think it makes much difference to people whether an AGI "knows" its creation is superior by some measure because it's got certain knowledge of its "truthfulness" (and it may not be clear why that, in particular, should matter, even), or whether it's just statistically determined with 99.999999% certainty that it is—which, again, is basically how we operate for the most part, except we're rarely that certain.

Like, if one person makes an AGI that doesn't seem to be able to operate at a level higher than a rat, and another makes one that promptly becomes god-emperor of the galaxy, I don't think anyone's gonna get hung up over whether we know "truthfulness" of them both before saying which is the superior AGI. If an AGI does the same, I guess we can rest assured that it doesn't know, for a certain definition of "know", that it has—but it may "know" it in plenty of other senses that we use the term.


Really appreciate your reply. I hadn't spent much time thinking about what happens when, e.g., AGI X does not know that AGI Y is truthful, but rather, knows that there is probability at least 99.999% that AGI Y is truthful. Maybe there is some interesting mathematics down that rabbit hole. You've given me something to chew on! :)


Hahaha, I'm glad you were able to take away something useful from an ignorant jerk (me) heckling your paper :-)


Sorry, I forgot to clarify that when I talk about the AGI creating a child, I mean the AGI creating a child in such a way as to know the child's truthfulness and source-code. Otherwise, of course, a monkey could "create" an AGI by blindly typing source-code on a typewriter and getting lucky.

> Do you believe you can make an AGI that is smarter than you?

There are different ways of measuring intelligence. For the measure addressed in the paper, no, I can't single-handedly create an AGI smarter than myself (at least not in such a way as to know its truthfulness and its source-code).


I have no idea what George Dyson means when he says Turing had a 1D memory and Von Neumann changed it to a 2D memory. I consider both to be equally 1D, with Turing having sequential access and Von Neumann random access. What I call 2D memory is a segmented addressing scheme.

The implementation in the form of Williams Tubes (used in Von Neumann's Princeton machine) is indeed 2D, as is a modern DRAM or SRAM. But it is wrapped up in a 1D interface to the rest of the computer (a little less obviously so with RAS and CAS in DRAMs).
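
A toy sketch of that "1D interface over a 2D array" point, in Python (the 1024 x 1024 geometry is just an assumed example): a flat address is split into a row part (strobed with RAS) and a column part (strobed with CAS), so the two-dimensional organization stays hidden behind a one-dimensional address.

    ROW_BITS, COL_BITS = 10, 10   # hypothetical 1024 x 1024 cell array

    def split_address(addr: int) -> tuple[int, int]:
        # Split a flat (1D) address into the (row, column) pair a DRAM
        # would receive during its RAS and CAS phases.
        row = (addr >> COL_BITS) & ((1 << ROW_BITS) - 1)
        col = addr & ((1 << COL_BITS) - 1)
        return row, col

    print(split_address(1023))   # (0, 1023): end of the first row
    print(split_address(1024))   # (1, 0): start of the second row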


Has anybody been able to secure funding from the NFX FAST seed program?

It was supposed to be an online only application with a clear timetable.


Am I the only one who thinks it's weird Dyson rips off the title from Samuel Butler's seminal masterpiece? It strikes me as extremely disingenuous; certainly unbecoming his intellectual aspirations.


This is how I imagine 'mad scientists' justify their work.


I feel like calling VCs "mad scientists" is unduly ennobling.


"mad engineers"? /s



Having seen how scientists build things vs. how engineers build things, I don't see how that can be true. If it were, I'd expect the take-over-the-world failure rate to be somewhat less than the current 100%.


"Mad millionaires". Which is closer to the mark and invokes correct Hollywood clichés.


Rich people can't be mad, they're just eccentric.



