whaaswijk's comments

I have some anecdotal evidence against this. Learning how to write automated proofs using Isabelle/HOL definitely improved my ability to write proofs with pen and paper. Also, I wonder what is meant by “traditional activities”. Unfortunately the article seems to be behind a paywall so I can’t check…


I don't think writing proofs in Isabelle is really comparable to regular programming. Note also that writing formal machine-verifiable proofs is a skill that few career mathematicians have. Writing regular proofs is a comparatively much simpler skill, one that is universal among career mathematicians.

So, the fact that your practice of a very complex version of a skill also improved your ability to practice the simpler version of the skill isn't really surprising.

Incidentally, my understanding is that many mathematicians find such deeply formal proofs hard to follow, compared to more informal ones. It would actually be interesting to know whether your proofs have become more or less satisfying to a practicing mathematician after your experience with Isabelle/HOL.


Well, others in this discussion are already questioning it, but this particular research was on 4th and 5th grade students using Scratch and on some basic math (arithmetic) concepts. I'm not sure Isabelle would be appropriate for them. The submission title is pretty clickbaity; the actual title is more reasonable: "Impact of programming on primary mathematics learning."

Which also conveys the level of math (primary math) against which programming was evaluated.


I think the “doing your work well” part is often the question. Am I really doing all I can? Could I do better/more? This then leads to working more and being busier. Maybe the problem is that it’s hard (for some of us) to know what “well” really means.


I think "Could I do better/more?" is just a dumb question. As an employee, I have no plans to ever ask that of myself again. I'll get some things done that I set out to do, sometimes they'll take more or less time than I think, and the rest is either neutral or margin. There's no point in doing more. But you're right, you need a measure of what the right amount is, and I think that varies and can be hard to pin down.


This comment reminded me of something from the recently posted Sivers "Relax" article.

In a marathon-like environment, you don't want to go to 100% for very long, because you'll then drop to 50% or less for longer than if you had just maintained 85% for the majority of the event.

You can always do more, so the question then is: Do you really need to be doing more right now? Is this the best use of the boost before I need a break?

https://news.ycombinator.com/item?id=32626119 sive.rs/relax


This is one reason why I go all in on practices that are backed by data to correlate with business success (CI/CD, DevOps, etc.). Think the Accelerate book. Software development has the added difficulty of being quite a young industry (relatively speaking), but (with tongue in cheek) how can you be expected to do better than practices espoused by industry leading research and companies?


Thanks, that sounds like a good strategy. I’ll have a look at the Accelerate book.

> how can you be expected to do better than practices espoused by industry leading research and companies?

True, but not everyone has reasonable expectations :-) In those cases perhaps all that remains is to either have a frank discussion or to part ways.


That’s actually not clear to me. Anecdotally I have often experienced the saying “opposites attract” to be true. I have had discussions with friends who hold your point of view. Of course it could also depend on how we define similarity (e.g. in terms of race, ethnicity, personality type, etc.). I wonder if there’s some percentage of the population that’s wired to look outside of itself.


> “opposites attract”

This is neither right nor wrong. As is usually the case, it's quite a bit more complicated than that.

Most people who enter a relationship with someone else prefer a partner with a significant overlap in their demographic. Especially attributes like age, race, class, culture, height, hobbies, ambitions, etc. This is not universal, of course, but it is far and away the most common case, and is supported by both research and casual observation.

The most robust romantic relationships seem to be when the two people are largely similar but whose more minor differences complement one another in a positive way. One way to summarize it: A strong team of two is more than twice as effective as a single individual. Relationships that don't last tend to be because the individuals cannot find enough common ground to be happy in the present and don't see a positive future (the most common case), or because they are _too_ similar, grow bored of the relationship and feel like they are missing out on some critical piece.

There are outlier couples who seem to do just fine despite apparently massive differences in personality, culture, and so forth, but they are just that: outliers.


I mostly agree with your post. I’m just curious about which variables people care about most and how accurate your “far and away” statement is. For example, a quick google search suggests that if we take race/ethnicity, the following study shows that 17% of newlywed couples in the US are intermarried.[1] A minority to be sure, but a substantial one. Of course this is just one variable. It could be that such couples would be very similar wrt other factors. And this says nothing about the long-term potential of such relationships.

[1] https://www.pewresearch.org/social-trends/2017/05/18/1-trend...


Is there a mechanism in place to stop people from laundering money with Ethereum? This is a serious question; I'm a total noob when it comes to crypto.


You can’t stop crime entirely; if you could, murder wouldn’t happen. If you give people a modicum of freedom then some amount of them will act with criminal intent.

Arguably the blockchain makes detecting and prosecuting money laundering easier than in our opaque legacy financial system.


Right, in the same way that in the hypothetical Blockchain utopia anyone seeing you eat at a coffeeshop can also see what porn sites you visit, how often you shop at the dispensary, and which escorts you prefer to visit.


What explanation are you referring to though? Just saying consciousness is an emergent property is not an explanation. Rather it is an assertion.


It's a working hypothesis in which the question of HOW consciousness emerges out of computation remains unanswered. But you could argue that you shouldn't assume the hypothesis is insufficient until it's shown that the question is not answerable without adding extra assertions (that there is something beyond computation).

Just like we stick to the "planetary orbits are only shaped by gravitational interactions" hypothesis, and if we observe deviations, we try to exhaust all possible explanations that remain gravity based before introducing the possibility of other forces at work.


I’m not sure the analogy to gravity works. At least in the case of gravity we have a model which (largely) explains the planetary orbits. As far as I know, we are not even close to a model in the case of consciousness. And even if we had a model for the “easy problem” it’s possible that the “hard problem” would still remain.

Edit: to be clear, I’m agnostic on this problem. I just don’t really like the emergence “model”, where we have a bunch of supposedly non-conscious matter and if we put enough of it together in the right way consciousness just pops into existence.


Grass and trees and such. Perhaps bushes and animals too.


Are there any experts that can ELI5 this “breakthrough” for us? How much closer does this bring us to a useful number of logical qubits?


Not me. But when I read this:

"Figuring out how to feed the world or cure it of climate change will require discoveries or optimization of molecules that simply can’t be done by today’s classical computers, and that’s where the quantum machine kicks in"

I was like "hopefully it will lead to better bullshit meters because mine just exploded"


:-D


Serious question: how can consciousness be an illusion? I’m not even sure what that would mean. One could argue that many things are illusory (e.g. the external world). But it’s quite hard to dismiss the notion of actual experience occurring in the world. Our experiences could lead us to false conclusions, sure, but we cannot deny their existence.


The experiences are real, but the homunculus behind your eyes that you think of as yourself is an illusion. In reality there are a bunch of fairly independent processes talking to each other, sharing the perceptions and reacting to them; feedback from those reactions again becomes a perception; a "narrator" process often gives running commentary in your mother tongue; etc. The interaction of those processes feels like a unified thing, "you", and that's the illusion. This illusion has adaptive advantages, because it helps the organism take care of its needs, so evolution selects for it.

That's my take, maybe not exactly the same as Damasio and Seth's, but compatible with theirs.


If I understand correctly, you're referring to the "easy" problem of consciousness, i.e. the "mechanistic" explanation of how a self could be constructed by the brain. That's an interesting question and I think your take is a coherent and plausible one (from a materialistic perspective). However, I still think this doesn't get around the hard problem of why these interactions actually feel like anything. I've never heard a satisfying materialistic explanation of that. Do you believe the interactions could in principle be implemented on any Turing machine? Or are they substrate dependent?


Actually I do; but... 1) such a Turing machine may have to be a lot more powerful than anything we currently have. With or without invoking QM mechanisms, there is reason to believe that every single neuron does a lot more computation than our simplistic models in current ML neural nets. 2) it may not be possible to "program" a machine to be conscious in the way we feel conscious; we'd probably have to literally evolve it, i.e. in a rich simulated environment, starting with simple artificial "organisms" that "feel" this environment and then getting progressively more complex.

But I do believe in Wolfram's "principle of computational equivalence", and thus that anything that can implement a Turing machine can also implement any other complex system, including consciousness.


I guess this is where we differ. I don't see a sufficient reason to believe that increasing computational capacity/complexity alone gives rise to consciousness. Moreover, I think there are common sense reasons to believe that consciousness is not substrate independent. Therefore, I don't see it as obvious that Turing completeness is sufficient for consciousness. For example, as someone else on this post has pointed out, a sufficiently complex water pipeline can implement a Turing machine. However, I doubt it would ever be conscious, no matter how large we make it. I think representing and processing information is orthogonal to experiencing.


I think we do agree... "complexity alone" will certainly not give rise to consciousness. Consciousness begins with feeling and separating the "I" from the "other": I feel hunger; I feel that there is food. That's why I said we'd have to evolve it in a simulated environment, one in which there are things for a nascent consciousness to feel. So in that sense yes, it depends on the substrate, but the substrate could be virtual, simulated on a powerful enough Turing machine.


You haven't explained why you think they shouldn't "feel like anything". How do you distinguish "feeling like anything" from anything else you experience?


Well, as far as I'm aware notions of feeling or experiencing are not accounted for in our current physical models. Does an electron feel anything? On the one hand, if it does, it seems to me like physical models have to be extended to include some primitive form of consciousness. This would be something like panpsychism. On the other hand, if single electrons do not have consciousness, why do large collections of them in specific structures have it? Note that, to me, it seems insufficient to say that collections of electrons can be used to model or compute with. Namely because it raises the question of why this modeling has a feeling tone (qualia) to it.

Finally, I don't know if I can make a meaningful distinction between feeling and experiencing. I believe a feeling is an experience.


I think they were referring to Daniel Dennett, not Antonio Damasio or Anil Seth. Damasio is interested in how the self is constructed (so, similar to your idea) while Seth is saying that experience is a controlled hallucination. Dennett is the one claiming consciousness is an illusion. These are three very distinct research programs.


Except that they all agree that there is no "hard problem of consciousness"; that's the part that's illusory. I think Dennett takes the illusion position too far, so I think I'm somewhere between those three. But there are others who take it even further, such as Graziano in "Consciousness and the Social Brain", whose "consciousness is an attention schema" position seems so nonsensical to me that I can't even explain it despite having read the book.


Either it's inane wordplay or it's obviously nonsense for most definitions of consciousness. An illusion must be perceived, so in that sense the proposition is self-refuting. Also, you have empirical evidence against it: you know when you're conscious and that sometimes you aren't (and that it's a spectrum). Thirdly, you can define consciousness behaviorally. Fourthly, if I'm not mistaken, you can measure its presence neurologically. To say that all of this is an illusion is stupidity, or being very obtuse for attention (like clickbait).


I think many people are reluctant to use non-traditional pronouns. I’m not trying to condemn or condone that, just pointing out that to me this example seems unrelated to autism.


Would you like to say a bit more about why that's not a valid argument? To be clear, I'm not saying it is (I don't know enough about the subject to do so), but it doesn't seem that far-fetched to me. Isn't similar probabilistic reasoning used to explain why evolution by natural selection gives rise to various complex life forms? If so, do you also think that that reasoning is shoddy?


Let N be the number of places life could arise, and p the probability that life arises in one of those places.

That argument is basically "there is a value of N such that for any p > 0, N p is much greater than 1."

But that's obviously wrong. For any N, there are values of p > 0 that make the product N p arbitrarily close to 0.

The dim intuition behind the argument was that p can't be "too small". But given our current understanding of OoL (the origin of life), that's not a justified assumption. p could be exponentially small, if OoL requires some extremely unlikely step.

Natural selection is great once the system's reproductive fidelity is good enough to support it. The problem is bridging the gap from small molecules to that system. The smallest system we know of that can independently support Darwinian evolution has billions of atoms.
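To put toy numbers on this (every value below is purely illustrative, not an estimate), a quick Python sketch of how N*p behaves:

    import math

    N = 1e24                      # hypothetical number of candidate planets
    p_optimist = 1e-20            # "p can't be too small": N*p >> 1
    p_pessimist = math.exp(-200)  # an "exponentially small" p

    print(N * p_optimist)         # 1e4: ~10,000 expected occurrences
    print(N * p_pessimist)        # ~1.4e-63: essentially zero expected

The point being that no finite N rescues the argument once p is allowed to be exponentially small.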


In this formulation, isn't p^N the probability that ALL places where life is possible actually have life? It makes sense for that to approach zero.

What we want is the probability for at least one other place other than ours to have life. This would be 1 - (1-p)^N, which does tend to 1 as N gets arbitrarily large.

To get that formula: (1-p) is the probability that life does not exist in a given place, so (1-p)^N is the probability that ALL places where life is possible have no life. Therefore, 1-(1-p)^N is the probability of the opposite (that at least one place has life).
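Here's a small Python sketch of that formula (values purely illustrative; log1p/expm1 avoid the floating-point underflow you'd hit computing (1-p)**N directly for tiny p):

    import math

    def prob_at_least_one(p, N):
        # 1 - (1-p)^N, computed in log space for numerical stability
        return -math.expm1(N * math.log1p(-p))

    N = 1e24
    print(prob_at_least_one(1e-20, N))  # ~1.0: N*p = 1e4
    print(prob_at_least_one(1e-40, N))  # ~1e-16: N*p tiny

For any fixed p > 0 this does tend to 1 as N grows; the second line just shows how large N would have to be before that kicks in.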


For a random variable X taking on non-negative integer values (here, the number of occurrences of life elsewhere in the universe), by Markov's inequality the probability that X = 0 is >= 1 - E[X]. Here, E[X] = Np, so if Np is very close to 0, the probability that X = 0 will be very close to 1.

That the probability goes to 1 as N goes to infinity FOR FIXED p is just another example of assuming p can't be "too small". The probability also goes to zero as p goes to zero. Why are you fixing p and not N? Why are you assuming p is large enough that N is in that asymptotic range where the probability has approached 1?
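To illustrate the bound with toy numbers (again purely made up):

    import math

    N = 1e24
    p = 1e-30                             # so E[X] = N*p = 1e-6

    exact = math.exp(N * math.log1p(-p))  # (1-p)^N: P(no life anywhere)
    bound = 1 - N * p                     # Markov lower bound on P(X = 0)
    print(exact, bound)                   # both ~0.999999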


That seems right, but from a scientific point of view (as opposed to, say, a certain sort of theological view), two occurrences is not much more than one (even though one is so much more than zero.)


Two occurrences would actually be much more than one! Our own existence is useless due to observer selection, but discovery of even a single other independent OoL event nearby would allow us to infer OoL cannot be too uncommon.


Observer selection does not eliminate us as evidence for the proposition that life can exist. As for whether it is rare, you added the qualification 'nearby', and while it is true that it is most likely that any extraterrestrial life we detect will be nearby, the post I was replying to was arguing about the universal probability of life coming into existence, not about whether it will be discovered by us.

Furthermore, proponents of an extraterrestrial origin of life on Earth will doubtless argue that nearby life may have had a common origin.


Observer selection means p > 0 (i.e. the inequality is strict), but it can't tell us any more. Bayesian reasoning from our own solar system can put a reasonable upper limit on p, but that isn't very helpful.

However, if we found life on Mars that same Bayesian reasoning would imply a meaningful lower limit on p as well, since life on Mars is independent of our existence to observe it.


If we found life on Mars that was independent of life on Earth it would imply a meaningful lower bound. Even finding a fundamentally different biosystem on Earth (life that didn't use nucleic acids, say) would be informative.

Just finding life on Mars that's the same kind of life as on Earth would not tell us much, as it could be explained by panspermia. There are Mars rocks on Earth, so transfer of life in those rocks should have happened constantly. If early Mars were habitable it almost certainly had life, due to this transfer.


> However, if we found life on Mars that same Bayesian reasoning would imply a meaningful lower limit on p as well.

If we found life on Mars tomorrow, how well-defined would that lower limit become?


This explains why it may not be a sound argument, not a demonstration of its invalidity. The distinction matters, because while invalid hypotheses can be summarily rejected, valid ones might turn out to be right.

Of course, if some people don't understand that this one is not an established fact, and that annoys you, I can't say you are wrong.


Yes. Of course, I was not arguing that life must be rare, I was arguing that the evidence we have does not compel one to believe life must exist elsewhere in the universe. The opposite of belief is not belief in the opposite.


There are rare instances where people say that life exists elsewhere; others just state that there is a probability > 0.

I agree that it's arbitrary to assume that the order of magnitude of N must exceed that of 1/p. That probably stems from the assumption that the universe is endless.


> p could be exponentially small, if OoL requires some extremely unlikely step.

"exponentially" is not a measure of size, nor is it a measure of relative size. If you think this anything base on "exponentially small" is a valid argument, go look in a mirror and slap yourself.


The meaning is clear in context. Try reading what I wrote in good faith rather than searching for a gotcha.


If you don't like me repeating your words to you, maybe you should look in a mirror and slap yourself?


Ok, I will spell it out.

"Exponentially small" here means "the probability could be ~ e^-n" where n is a number proportional to the complexity of the minimal evolving system. This would happen if there's some gap that has to be bridged by random chance before we get a system capable of sustaining natural selection.

The point here is that this could easily be vastly smaller than 1/N, where N is (say) the number of atoms in the universe x age of the universe x rate at which atoms might interact to form such systems.
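A rough log-scale illustration in Python (every number below is a made-up placeholder, just to show how the comparison plays out):

    import math

    # Generous count of "chances for chemistry to get lucky":
    atoms = 1e80     # rough atom count of the observable universe
    seconds = 4e17   # rough age of the universe in seconds
    rate = 1e12      # invented interactions per atom per second
    log_N = math.log(atoms * seconds * rate)  # ln(N) ~ 252

    # If OoL required one lucky jump of ~1000 bits of specificity:
    log_p = -1000 * math.log(2)               # ln(p) ~ -693

    print(log_N + log_p)  # ln(N*p) ~ -441: expected events ~ e^-441

Even with absurdly generous assumptions about N, a single ~1000-bit gap leaves N*p unimaginably far below 1.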

I think you could have easily understood this point if you had made an effort to do so, without me having to spoonfeed it to you here.


If you think my point has anything to do with math, maybe you should go look in a mirror and slap yourself.


What about the argument that our existence is some evidence that a Bayesian estimate of p can't be so small that N p is less than one?

You're focusing on (lack of) evidence for a mechanistic explanation but that's not exhaustive.


The problem there is we don't know the "world" of possibilities from which our existence was drawn. It might be the universe (by which I read "observable universe"), or it might be one of a large number of causally disconnected universes, or even other branches of a universal wave function (in a Many-Worlds interpretation). The "N" there is not the same as the "N" of "our universe".


We know approximately the lower bound of N, which is the approximate number of stars in the observable universe multiplied by an informed estimate of the expected number of planets within the Goldilocks zone. That's usually what people mean when they discuss N. N could be that, or it could be much much larger, but I think it's fine to limit the discussion to the lower bound; we still have a huge N to work with.

Also I think you missed my point which is about Bayesian estimation of p, not of N.


I ignored the comment about Bayesian estimation because I couldn't turn that comment into something that made any sense. Perhaps you could explain in detail what you meant?


Your statements in this thread have assumed we have no info to work with (as far as estimating p goes) because we have no understanding of the mechanisms behind how life came to be. But this ignores the evidence that we are here, which is info that can be used in a Bayesian framework to estimate p. The fact we exist, as well as information about how many billions of years it took for us to evolve, contains significant information about p.


I still don't understand. We have no useful lower bound on the probability that life arises, so how does Bayesian reasoning bootstrap to any meaningful lower bound?


Who said anything about a lower bound of p? I was talking about a lower bound on N, not a lower bound on p.

Bayesian reasoning (by using the fact that we exist rather than don't exist, as well as other info about our existence, such as how long it took us to evolve) helps us estimate a probability distribution of p, as well as a central tendency estimate.

See e.g. https://www.liebertpub.com/doi/full/10.1089/ast.2019.2149
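For a flavor of what such an estimate looks like, here's a deliberately naive Python sketch: a grid posterior over log10(p) with a flat prior, conditioning only on "at least one occurrence in N trials". It ignores the observer-selection correction debated upthread and the timing information the linked paper actually uses, so treat it as illustrative only:

    import math

    N = 1e22                                 # illustrative number of trials
    grid = [i / 10 for i in range(-400, 0)]  # log10(p) from -40.0 to -0.1

    def likelihood(log10_p):
        # P(at least one occurrence in N trials) = 1 - (1-p)^N
        p = 10.0 ** log10_p
        return -math.expm1(N * math.log1p(-p))

    weights = [likelihood(lp) for lp in grid]  # flat prior in log10(p)
    total = sum(weights)
    posterior = [w / total for w in weights]
    # Mass concentrates where N*p >= 1 (log10(p) >= -22); below that the
    # posterior falls off in proportion to p itself.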


But selection can happen with autocatalysts as well. I agree that you can't say life /has/ to exist elsewhere, but I think the trend in research has shown that life seems likelier and likelier to arise the more it is studied.


"Trend in research"? How could that possibly work? Research will tend to clear the low hanging fruit early, which means the easy steps. This tells us nothing about how difficult the difficult steps (if any) might be.

The analogy I like here is those "collect the letters" games you see at fast food outlets and grocery stores. Buy a Happy Meal, get a scratch-off ticket. If you collect all the letters in some phrase you win $N million. When you start the game, the trend is great. Letters are arriving and the phrase is filling in. But try as you might, that last letter never shows up. The game ends and you've won nothing. Of course, the game was designed so that the last letter controls how many winners there could be. All the rest were distractions.


It does however tell us that the "easy" steps are easy, which was never a foregone conclusion. The other steps will remain what they are. It doesn't mean the trend will continue.

I find it weird to use a deliberately rigged game as an example. If one of the previous letters were the missing one, the last letter showing up wouldn't mean you win either. It's like saying the difficult steps are going to be extra difficult because other steps were found easier than expected.


The point is that if you have N independent boolean random variables X1 ... XN, establishing a lower bound on the probability that some proper subset of the Xi are true doesn't provide any useful lower bound on the probability that they all are true.


Sure, my point was only that if the lower bound on the subset is higher than anyone expected, that will increase the probability of them all being true compared to your prior belief. And it will also increase the probability that life is more common.

You could argue that the priors were garbage I suppose. I'm not arguing for any particular probability.

The McDonald's example does not have independent variables, as X1 ... XN-1 are deliberately made more likely while XN is made less likely. I'd also argue that the origin of life doesn't have independent variables. If chemistry turns out to be more or less powerful in one setting, that should do something for our assessment of other settings, especially when similar processes are involved.


The prior belief must have been based on something. Where does a prior belief that ET life must exist with at least a certain probability come from?


The question becomes what do you think the hard step is?


It's not up to me to show that, since I'm not claiming life is rare. It's up to the person making the strong statement that life is (not just could be) common to convince me that there is no sufficiently difficult step. All I need to do is plausibly argue there could be a difficult step. Pointing out the complexity of all known self contained systems capable of Darwinian evolution is sufficient for that.

