The PrimitiveTechnology videos are such an incredible example of the wonders the internet provides. Here you have a guy making such interesting, engrossing videos, uploaded for free and accessible to all. The series is worthy of being on the BBC.
I wonder if there are any companies who spot this sort of content with the aim of getting it onto TV screens.
I think much youtube content is better than the output of the BBC, which shows an unfortunate tendency to "jazz up" its shows to make them more "accessible" and "fun". If Primitive Technology were on the BBC, he wouldn't be allowed to do everything without talking (he rarely makes eye-contact with the camera, even). There would probably be a "panel of experts" to discuss or direct the course of events, and they would introduce some competitive element, so several people would be working against each other to produce the best shrimp trap.
The same thing goes for other channels like Matthias Wandel, Paul Sellers, Alain Vaillancourt, Marius Hornberger, Louis Sauzedde and Chris on Clickspring. These are excellent channels, I would recommend all of them if you are into making things with your hands. But they could never exist on mainstream TV, and if they did, they would be stripped of their charm and value to make them more commercial. So youtube > regular TV, imo.
I am pretty sure that if you like Primitive Tech you would love the historical documentaries the BBC did on this subject.
Primitive Tech's presentation and focus on one technique per video is nice. But for comprehensiveness, you can't beat reenacting the 15th-16th century by living on a farm for a year, as the BBC did in "Tales from the Green Valley", "Tudor Monastery Farm", and related series.
A lot of the exact techniques Primitive Tech covered are also covered by those series:
It's a neat subject, but I think you may have missed m0nty's point, a perspective which I share, about "jazzing up" with unnecessary dialog.
I just clicked on "Fish traps" and my ears are full of idle banter between the two people making traps, and whenever they're quiet for more than a moment the narrator breaks in. Someone is speaking at all times and it's obnoxious. This complaint is entirely distinct from how comprehensively they cover the subject matter.
BBC historical documentaries are far from the worst offender in this area (American TV is far, far worse), but primitive tech does a much better job than the BBC of permitting the audience to simply observe and absorb.
This makes me think of the Brian Cox wonder-tone. Everything he says has a tone of wonder so thick that I can't watch his documentaries. Presumably it works for another audience.
I feel mildly revolted when I see him. I don't feel like I've much control over the reaction either, it's automatic. I feel a bit bad about that because I'm certain he's a reasonable human being albeit a waxy doll-human. This video gives me the same impression.
There's also something about his presentation that is like some version of the uncanny valley with intellectual ideas. TED talks have this quality sometimes.
I don't have this with Sagan or Dawkins even though they will often use poetic turns of phrase.
I tried explaining this to my cousin who is infatuated with Cox, but I came across as somebody with highly specific form of racism directed at just Brian Cox.
Kenneth Clark's Civilisation.
James Burke's Connections.
Carl Sagan's Cosmos.
Show and tell.
That kind of television isn't made anymore. I say that as a millennial. If you have a niche interest in architecture or wildlife you can still find it today but not in popular television series. When I watch modern popular productions I find I can retain almost no information about them. I don't think it is me, I think it is them.
For anyone curious about this who didn't want to watch the video, the trap consists of:
1. A large woven cone
2. A smaller woven cone without a tip
The smaller cone is placed in the larger one; shrimp swim into the small cone to explore, but then get caught in the space between the two cones when they try to get out (presumably because it's difficult to find the single entrance).
He mentions that the only skill necessary is basketweaving -- I wonder if it would be possible to carve something similar (two interlocking geometric shapes) or if the trap being woven is essential to its function, for example, for allowing flowing water in to entice shrimp.
One of the parts that stood out for me was
> In practice, a long stretch of creek might have several traps collecting food each day without any effort on the part of the fisherman.
If he were to go whole hog long-term, shrimp traps would free up his time for doing other crafts in ways that spear fishing or actively hunting wouldn't, though I suppose the local yield of shrimp would factor into that (whether he could collect enough calories consistently to fund his other efforts).
100 grams of shrimp are roughly 100 calories. Let's generously assume that one shrimp from the video yields 200 grams of meat. So he would need to continuously catch 10 shrimp a day to satisfy a 2000 calorie diet. Seems to me that the shrimp population of this stream in the video would be exhausted pretty quickly and he would need to move elsewhere to continue hitting his calorie goals.
Given that we are surrounded by an incredible amount of high-calorie food (100g of a Snickers bar provides 488 calories), it is easy to forget how difficult it is to source calories in the wild.
You're off by a factor of over 20. A reasonable ballpark estimate for the total mass of a shrimp is 10g. The amount of usable meat is less (though in a survival scenario you would eat the whole thing, head included). Hell, the average meat yield from a standard size blue crab is only around 60g.
You would need to eat netfuls of shrimp per day to meet caloric requirements, not just ten! Anecdotally, I can toss back dozens of appetizer shrimp and still be hungry for a main course.
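For what it's worth, the back-of-envelope arithmetic under both estimates works out like this (all figures here are the thread's rough assumptions, not measured data):

```python
# Back-of-envelope survival math; every figure is a rough assumption.
KCAL_PER_100G = 100      # shrimp meat: roughly 100 kcal per 100 g
DAILY_KCAL = 2000        # target daily intake

def shrimp_per_day(grams_per_shrimp):
    """Whole shrimp needed per day to hit the calorie target."""
    kcal_per_shrimp = grams_per_shrimp * KCAL_PER_100G / 100
    return DAILY_KCAL / kcal_per_shrimp

print(shrimp_per_day(200))  # generous 200 g estimate: 10 shrimp/day
print(shrimp_per_day(10))   # 10 g ballpark: 200 shrimp/day
```

The factor-of-20 gap between the two estimates is exactly the correction being made here.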
River shrimps are very common. It wouldn't surprise me if that creek held a thousand of those shrimps. Taking out 1% every day may actually be sustainable, as it takes out grown shrimps that otherwise would eat smaller shrimps.
And that's to survive on shrimp alone. It would be foolish to try that, not only because it wouldn't be a good diet, but also because putting all your eggs in one basket definitely isn't a good idea when your life depends on it.
And of course, he may catch other edible animals, too, such as eels (this kind of trap is still commonly used in commercial eel fishing in Western Europe; if I had such a trap, I would consider putting a dead shrimp or other meat in it to attract eels).
He had some yams with it, as well! But yes, I think the big gain here is protein, two shrimp a day in the diet of a family with five members would be a huge boost vs. hit and miss hunting / gathering.
> I wonder if it would be possible to carve something similar (two interlocking geometric shapes) or if the trap being woven is essential to its function, for example, for allowing flowing water in to entice shrimp.
Yes.
Some traps work by allowing water to flow through, e.g. you can dam a stream and have water flow only through your trap. Or you can make solid bottle traps that fish explore and get trapped in.
you just casually threw out a question that is a source of fervent fundamental debate in the field of philosophy - see https://en.wikipedia.org/wiki/Qualia
So I'm going to completely hijack this thread and tell you just how deep this rabbit hole goes, this shit is going to blow your mind.
So first of all, how do you know that I (or the next person you meet IRL who'll humor your philosophy bullshit while they groggily drink their coffee) actually feel pain? Couldn't they just say they do? Moreover, sims like AI characters in 2016 video games obviously don't really feel pain; they're just actors whose motions have been digitized. They're obviously "pretending" or mimicking the reaction -- there's no neural or nervous structure there.
So IRL it's pretty obvious why actual people (who tell you they feel pain and behave as though they do) and actors (who are literally acting and pretending) differ from video game characters who just act that way -- the real people are made of the same stuff you are. You feel pain, so obviously something made of the same stuff will feel pain. Asking whether a replica is "really" feeling pain is pretty silly.
Okay, so what if we replace one of your neurons with an electronic version that interacts with the other neurons over the same (already established, adult) pathways, and you can flip a switch to go back and forth between the artificial neuron and the real one? As you flip back and forth, since as a "black box" the artificial neuron has the same chemical effect as the real one, obviously the emergent behavior will be the same.
It's as though we flipped a switch back and forth to replace a structural bone in a bird with a 3D printed one that is plastic but with the same weight, flexibility, density, porousness, etc, and in every way as regards the rest of the bird's systems it takes on the same role. So obviously the bird will still fly.
If you ended up somehow replacing every bone and muscle with an artificial version, the bird would still fly. What I mean is that birds are a proof of concept that flight is possible. The idea that anyone could ever say that heavier-than-air flight was impossible, when birds literally prove otherwise, is ridiculous.
So it's obvious that pain in artificial systems is possible; it's just ridiculous to say it's not. If you threw the switch back and forth between your real neurons and artificial black-box neurons with the same structure and behavior, obviously you would report the same effects (you would say you feel pain the same way) and obviously we would believe you. This is obvious. It's completely out of the question that neurons actually have an interaction with some spiritual ethereal plane using quantum effects or something, where pain really comes from Souls, and when you switch to an artificial version the Soul is cut off from the ethereal plane, so you would stop feeling the pain. That's bullshit and not worth discussing.
Okay all of this was really really simple. it's child's play.
But now here is the shit that will totally and completely blow your mind, this shit is insane. So there are only like 86 billion neurons, they fire at like 150-500 Hz, and the whole brain just weighs a couple of pounds (a shrimp's, even less). Nothing travels at light speed; these are chemical and biological connections rather than nanometer-scale etchings on silicon. So a server room with like 10,000 servers, each with 64 cores running at 4 GHz and with 64 GB - 256 GB of memory, is ready for all that. It's overkill. The human brain is a couple of pounds of stuff, it runs on a few tens of watts, it works at like 150 Hz before neurons get tired. It's not light speed. 4 GHz is faster by a factor of millions. And this lets you model it in real time in a server-room-sized room; you don't need something tiny. The only hard part is that there are like 86 billion of them -- that's quite a lot of memory.
But anyway, in the long term it's obviously possible to make a simulation that, whether in real time or a bit slower, simulates a brain; this stuff is now easy. Okay, let's do it. You make the simulation. You program in VR stuff so we can interact with it. You ask it how it's feeling, etc. You set all this up.
Let's say the whole environment takes a million terabytes, whatever. Go for it. Okay we're doing it.
Now here's the shit that will blow your mind. For sure we don't need quantum effects. For sure we can have the VR program use a pseudorandom number generator to go through its world-states, and for sure this will work even if we don't seed the PRNG with a source of real-world entropy.
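The determinism claim itself is easy to illustrate (a toy sketch, with the hypothetical `world_history` standing in for a whole simulation step function):

```python
import random

# With a fixed seed and no real-world entropy, the simulated "world"
# is a pure function of its initial state: run it twice and you get
# identical histories.
def world_history(seed, steps=5):
    rng = random.Random(seed)
    return [rng.random() for _ in range(steps)]

assert world_history(42) == world_history(42)   # fully deterministic
```

The same would hold for a simulation of any size: seed and program fix every subsequent state.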
So, now it's deterministic. Our world isn't, so I used to think this wasn't a big philosophical problem -- who cares if in theory we could be deterministic, quantum mechanics means we're not. But it turns out this IS a problem, because in 2016 we damn near have the hardware to have a program tell us that it's thinking, and since we're modelling every human brain cell, all 86 billion of them, of course we'll believe it. For sure. No real question about it. It'll actually be thinking.
But it's deterministic. The total world-state we've set up for this has a checksum. It's like the first screen of Super Nintendo Mario. It'll always be the same -- that ROM has a checksum, it's not something random. It's just math.
So that is the mind-blowing part. We don't have influence over the program once we've written it and taken its checksum, you know, this is what the program data is, these are the rules. Then it's out of our hands.
So the end result is already fully determined - like the quadrillionth prime.
So we've already said that it feels pain, for sure, but it's just math, and actually running it to find out what it reports won't change it. If calculating numbers -- imagine, for example, that I'm calculating the series of even numbers: 2, 4, 6, 8, 10, 12... -- could cause pain to be created, surely in some sense that pain already exists even before we've calculated it?
This is not really a theoretical problem. It's very practical: DNA is quite literally "code" (that has been sequenced). We have it; it's about 700 MB uncompressed, and it was fully sequenced in the late nineties or early 2000s. So we know for sure that this math exists. Give it a few more decades and we can work in the other direction and model cells growing from DNA in a VR womb while an organism develops -- we have the data for it today (except a few details).
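The 700 MB figure is roughly right; a quick sanity check (genome length is an approximation, and real sequence files carry extra overhead):

```python
# Sanity check on the "about 700 MB" figure; numbers are approximate.
base_pairs = 3.1e9            # approximate human genome length
bits = base_pairs * 2         # 2 bits per base (A, C, G, T)
megabytes = bits / 8 / 1e6
print(round(megabytes))       # roughly 775 MB uncompressed
```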
This is different than the neural argument, because there are so many atoms and cells in a human body, but the gist is the same.
So in a sense, the math for that already exists, calculating it doesn't do anything but tell us what that math "feels", which we know for sure feels.
So in short: if there's a quadrillionth prime (actually let's say nonillionth, just to match the number of atoms in a decent-sized VR chamber, that's the size of a very large opera hall or something) -- so, if there's a nonillionth prime, then it feels pain. And it's asking for an aspirin.
(Of course "prime" is a stand-in for some other function that we come up with but which is still deterministic.) This is actual reality that you have to really accept.
Throughout history, whenever scientists had to reject their own absurd conclusions that followed rigorously from what they knew for sure, what ended up happening is that their own cultural biases and worldview were too strong. They had discovered something big, but were too closed-minded to accept it.
So what we can say, for sure, is that numbers can actually (not in a theoretical sense, but actually, as in, using more or less the hardware that we have today) feel. True, they're very large numbers, but they feel. The nonillionth prime is feeling something right now (again, as an analogy for a different mathematical function, not the prime function itself). And soon we'll be able to calculate what it is -- but as soon as we have the starting conditions, if we make it deterministic, the actual calculation will be out of our hands and already exist.
Of course, maybe there isn't a nonillionth prime until someone calculates it. That would make this conclusion a little easier to swallow, but to me that is rather hand-wavy. After all, the prime function (for mathematicians exploring what happens to be our standard set of axioms) works the same even in a parallel universe that doesn't share our physical laws. Even with totally different physical laws, if someone explores our mathematical systems, they can't find extra integers between the fifth and sixth primes (11 and 13): the difference will still be two in a parallel universe with different rules, as long as someone sets out to explore our axiomatic systems (even if those systems don't correspond to anything in that reality). So given that these truths transcend even physical laws, it's really bizarre and hard to understand how I could argue that the nonillionth prime doesn't exist until you calculate it. It does, and it (or rather the deterministic function we'll have a few decades from now for some VR program -- like the ROM image for Super Nintendo's Mario, but with a full set of human neurons and a nervous system and every reason for us to know it feels pain the same as we do) already has its behavior set in stone today.
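The number-theoretic fact being leaned on here is easy to check directly (a trivial sketch):

```python
# Check the arithmetic fact the argument leans on: the fifth and
# sixth primes are 11 and 13, and their gap is exactly 2.
def first_primes(n):
    """Return the first n primes by trial division."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes

p = first_primes(6)
print(p)            # [2, 3, 5, 7, 11, 13]
print(p[5] - p[4])  # 2
```

Any machine, in any universe, running this procedure over the same axioms gets the same answer -- which is the sense of "already set in stone" being argued.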
That is insane. So sure, this shrimp feels pain. And so do numbers. ow.
That stuff is all fun to think about, but ultimately it's not a rigorous model for anything - just a lot of assumptions and leaky abstractions.
The core problem with trying to reduce reality to computation is that we have no clue how laws of the physical world around us map to computation. Are computable functions enough to model the mathematical properties of quantum particles?
You can cross your fingers all you want, but without this, all of your reasoning becomes meaningless. The answer might be yes, but then again it might be no. If you could prove either in a rigorous and academic way, I suspect you would have a few prestigious awards waiting for you.
This is like asking whether the machine-learning model built into AlphaGo by Google really learned Go, or whether we have any reason to think it did. It really doesn't matter -- all that matters is that it beat the best human player. Yes, it plays Go.
When we take a very slightly different approach and put in computations that mimic human neural setups, and those computations (obviously - there are only like 80 billion neurons, they're slow and biological) report feeling pain, then of course they'll feel pain. It's absurd to even imply otherwise.
The stuff you said about the laws of physics and computation is a complete and utter red herring. It's like saying "we can't genetically engineer crops, we don't even know how plants work!! Computers can't model that stuff." It doesn't really matter if they can model it or not; we're doing it.
People will be doing this very shortly.
The kind of convolutions that you would have to come up with are just insane.
Here, I'll give you a model that supports your hypothesis and could be reasonably believed -- and is insane:
-> The human and animal nervous systems are as a matter of fact directly controlled by God to take on their emergent behavior, and God does this by means of making tiny perturbations to the random outcomes of quantum effects. So, when a quantum effect is in a laboratory, it behaves one way: but when it is part of a biological system, God influences individual quantum outcomes such that overall the object reports feeling pain. This is just the input/output between us and God. Now, separately, God also runs this "souls-r-us" place in the Ethereal plane (where we have no contact). That's where souls actually feel pain.
So the architecture is:
Souls-R-Us <-> God -> Quantum perturbations -> Physical laws -> Output says it is feeling pain.
But that pain is really felt in the Souls-R-Us plane, not our worldly plane.
So what happens, is if you model the biological creature, the model looks like this:
Physical laws -> Output
without the quantum perturbations that let God influence the physical laws to make the output report pain.
So, under the above insane model, you're right, computing states would never cause actual pain to exist, since there's no Souls-R-Us involved, we didn't actually build one.
But that's insane and not worth discussing. The actual state of reality is quite simple: if you build it, it will report feeling pain, because it will actually feel pain.
That's it.
All of your other stuff is just noise -- like arguments for why heavier-than-air flight is impossible. It's a distraction. Don't waste time with that line of thought. It's as big a waste of time as adding Souls-R-Us to our model. It's just not productive.

Also, this isn't just "fun to think about": a server farm full of servers can do more computation than a single human brain, today. This stuff is not theoretical; it's a current issue that is actually happening. Fine, a decade or two out, whatever -- we aren't anywhere near a VR with a human set of neurons declaring that it's feeling pain. But we are getting news every day of virtual neural nets surpassing humans in various human computational tasks, and in many cases with similar architectures (in the sense that they're neural nets). Sure, it'll be a while before pain emerges -- but it could also be accidental.
This stuff isn't "fun to think about"; it's happening right now. You're like Tim Berners-Lee saying, when he invented the web, "sure, it's fun to think about someone one day controlling a tea kettle with it."
Well, a couple of decades later, we're stuck with those design choices. These things need to be done today. It's not hand-wavy, it's not "fun to think about" -- it's happening right now.
As a matter of fact, it's not completely out of the question that some neural-net machine-learning project already actually feels pain but just doesn't report it. This is possible today. Unlikely, but possible. It would be an absolutely shocking news report.
but not as shocking as faster than light communication. Not even as shocking as an announcement that someone has built a working fusion reactor.
Someone else announcing that they've done it would be more shocking than the discovery that some heavy machine-learning project running on a few million cores happens to be feeling pain.
The latter is similar to the news reports out of machine learning that we are reading every single day.
So you need to radically realign your perspective. Your comment would have made a lot of sense in 1999 -- not today.
Go is a very well defined system that can provably be modeled using computable functions. So yes, for Go it doesn't matter whether it's a human brain or a computer interacting with the system - it's playing go. For pain or consciousness, things are not quite so clear cut.
Your post again consists of very drawn-out speculations around thought experiments, and does not build on or rely on any formalism to support what you are saying. As a result, the reasoning around computation sounds a lot like the 18th-century theorists who said the human body, including the brain, could conceivably be recreated with a very complicated steam engine. In fact, the arguments can be mirrored very exactly:
"a server farm full of servers can do more computation than a single human brain, today." -> "a room full of steam engines can generate more energy than a single human body, today"
"it's not completely out of the question that some neural net machine learning project happens to already actually feel pain but just doesn't report it." -> "it's not completely out of the question that some experimental locomotive project happens to already actually feel pain but just doesn't report it"
etc.
Again, until these ideas are supported with more than just handwavy speculation, they remain not much more than hot air.
As a side note, I'd encourage you to work on your writing style. Very verbose, drawn-out, scattered posts like the two you posted in this thread rarely get the point across. Prefer concise, plainly stated, direct arguments if your goal is to communicate something clearly to a wider audience.
>For pain or consciousness, things are not quite so clear cut.
yes they are. it's completely clear-cut and the physical substrate doesn't matter - qualia will arise in a modelled substrate. End of discussion.
you would have to go through insane contortions of the kind that I suggested, to have a different state of affairs:
- For example, God could make random quantum perturbations that etch out behavior that reports qualia, but the qualia are in fact experienced in an ethereal plane. In other words, I report to you that I experience pain because my soul experiences it in another dimension, and then God perturbs quantum effects that result in my brain firing a sequence of neurons that have me report I am feeling pain. He's kind of conveying things, acting like a messenger.
- God does not choose to (or can't) make these perturbations in computer models, and, therefore, the computer models fail to report that they feel pain. Instead, a computer model of a person simply "doesn't work", since it does not have access to God randomly flicking bits back and forth.
- To further describe this state of affairs: it would be like physical dice spelling out "We're human" if you throw them a bunch of times, but electronic dice spelling out "4++O_TxN%[n4LV3K" since God isn't perturbing them to spell out "We're human". Quantum mechanics is a way for God to make choices (that appear random to us but in fact communicate with an ethereal plane we can't access.)
That is a state of affairs that you can model formally. It's insane and doesn't describe the world. It's not worth a second of my time or anyone else's, even though we couldn't refute it.
I don't need to have a writing style that convinces anyone. I'm not Noam fucking Chomsky arguing with Skinner.
everyone else is simply wrong, there is nothing to be discussed, it's a waste of time to even discuss the topic.
Steam engines are exceedingly slow. The fact that they could model humans doesn't help us, because of the time scales involved (e.g. a million billion years to model one year, assuming you have a million billion tons of steam-engine metal, etc.) -- that's not helpful.
The state of computing isn't like computing on steam engines. I don't need to introduce formalism and the like; use the numbers and your head.
This is very unfair and I would prefer for you to edit it to be more substantive, in line with HN policies. The way it reads now is an ad hominem attack and derails the conversation: I'm a user with a long posting history here and the comment is made in earnest. You quote me without including any context at all, and don't add anything substantive other than literal name-calling.
In particular please note that I give a complete model under which my suggestion fails. If God perturbed quantum effects to cause people to report being conscious (like perturbing dice rolls to report "I'm human!"), it could formally be a method to communicate between an ethereal soul plane and our reality, which would not translate to being modelled: the minute you modelled it, God would stop perturbing the physical substrate (stop perturbing the model) and the model would "break" or fail to have the emergent properties that the physical substrate has; the dice would stop saying "I'm human".
Since there is no obvious way to refute this scenario, by giving it I have given a very strong argument against my claim.
Obviously I feel strongly about the issue, but only because the conclusion is certain for me. I have shown fairly rigorously (I didn't exactly spell it out) how a model is possible under which I am wrong (i.e. God) so I don't see what else you could expect of me: I've given the strongest possible case against my model.
My model is still right, I simply don't want to be distracted by noise. I think you see this is the case, or you would not have responded in this way. There is not really anything to argue about here. I guess we could agree to disagree if you don't see things this way.
Questions about the model are addressed with handwaving such as "use the numbers and your head" or "this is obvious and everyone else is wrong"; no existing literature or prior work is cited; rather, thought experiments that completely evade the points raised by others are built out of thin air.
The core point is that qualia emerges out of perception, and therefore that any system that can perceive in a broad sense, even if that perception is merely simulation of perception, will effectively experience consciousness. But there isn't any reference to proven scientific models for any of the parts at play here; posts treat the core premises as ultimate truths and then spin around that.
Regarding your mention of your posting history and your complaint of ad hominem: this is completely disingenuous given the way you have responded to other posters. If you want a better response to your future posts, I suggest making them shorter, supporting your core points with references to existing work, and not using the "everyone but me is wrong" style of discourse at all.
if you meant to summarize my ultimate truth via "The core point is that qualia emerges out of perception, and therefore that any system that can perceive in a broad sense, even if that perception is merely simulation of perception, will effectively experience consciousness" then you didn't understand. The ultimate truth or core point is that qualia emerges as an emergent property of what the parts are doing, and if you simulate those parts with enough fidelity consciousness still emerges. Certainly lots of things could perceive without being conscious - a roomba vacuum cleaner for one thing.
So this argument about simulating parts -- it's like a fine mechanical watch that tells time: if you simulate it with enough precision, the simulation can also "tell time". Only here, instead of telling time, the emergent property of the clockwork is "be conscious". You may not get my analogy, and that's fine.
I'm not here to convince anyone. It's not worth discussing, in my opinion. I've given a very explicit model under which I'm wrong. If you disagree with me, that's fine.
No, I think it's worth discussing. I may not have understood you completely, but I for one appreciated reading your explanation. You may or may not be right, but I thought your exposition was coherent and worth reading.
Presumably you agree that you could replace all the muscles and bones of a bird and it would still fly, but you feel it is not obvious that you could replace all the neurons in a brain and it would still think. Why not? what's different? What's missing?
> Presumably you agree that you could replace all the muscles and bones of a bird and it would still fly
Well, here's the thing: I think I have a problem with that premise in itself. Theoretically the idea seems easy to agree with. But in practice, has it ever been done? The answer is a resounding no.
Can we make things fly? Of course, but only by being insanely inefficient at it. Humans have never built something like a common swift, which can fly for 10 months without landing (http://phys.org/news/2016-10-ten-months-air.html).
Can we reach that point one day? Maybe yes, maybe no, I have no clue. But if we are to reach that point, we need to build sound formal models to get there.
Until we have such models for reasoning with things at that scale, I don't think claiming vehemently that these things are 100% known is intellectually honest.
The question isn't whether we can make an artificial bird now, but whether such a thing is possible in principle. I think logicallee's point is that any answer other than a clear "yes" would require the assumption that there is something fundamentally "unknowable" about the problem, and generally such objections are considered unscientific. Unscientific doesn't mean wrong, of course, but it does weaken the case.
So let's go back a step: can we theoretically replace a single bone in a single bird without affecting its flight performance? I think the answer has to be yes. If so, is there a point at which we can no longer replace successive bones? While it might be a technical challenge, it's difficult to see how there could be a clear line between "flies" and "doesn't fly".
Can we replace an individual neuron in a human brain with an electronic equivalent without affecting the perception of pain? The argument is that if one can, what's to prevent replacing all the other neurons in succession? Are certain neurons special and unreplaceable? Again, it's difficult to see that there could be any clear line between "pain" and "no pain".
I think the strength and weakness of logicallee's argument is the assumption that it's possible to replace a single neuron with an electronic equivalent without causing any other changes to the system. If one can, it would seem straightforward to argue that machines can feel pain. I think the alternative would require assuming the existence of some "life force" for which we currently have no scientific basis.
But if we are to reach that point, we need to build sound formal models to get there.
To actually build such a system, probably, but are formal models necessary to reason about the consequences? Maybe, but thought experiments might help us build those models. With that in mind, do you think it's possible to functionally emulate any single neuron in silicon? If not, what's the missing entity?
I don't think claiming vehemently that these things are 100% known is intellectually honest.
Be gentle with accusations of intellectual honesty. Honesty does not require absolute correctness, only belief. I see no reason to believe that logicallee is overstating his confidence in his logic, which at its base is Occam's Razor. This may be a case where it is better to be "clear and wrong" than "fuzzy and arguably correct".
> I think logicallee's point is that any answer other than a clear "yes" would require the assumption that there is something fundamentally "unknowable" about the problem, and generally such objections are considered unscientific. Unscientific doesn't mean wrong, of course, but it does weaken the case.
My take on it is that anything other than "we really don't know right now; reasoning about this question is far beyond our current scientific models" carries a burden of proof that I have not seen met anywhere (either in logicallee's posts, or in the writings of other "computable consciousness"-ists; I don't know if there is a better word for them). Testable, verifiable models are what this particular burden of proof calls for before I would be satisfied with a clear "yes" or "no".
> Can we replace an individual neuron in a human brain with an electronic equivalent without affecting the perception of pain? The argument is that if one can, what's to prevent replacing all the other neurons in succession? Are certain neurons special and unreplaceable? Again, it's difficult to see that there could be any clear line between "pain" and "no pain".
Well, you can easily turn this around - can we remove an individual neuron in a human brain without affecting the perception of pain? We actually know the experimental answer to this one, and it is a resounding yes - in fact, a few of your neurons probably have naturally died since you started reading my post, and your perception of pain (or anything else) most likely hasn't changed. And there are medical cases of people losing significant brain mass and operating in society at a perfectly normal level.
This makes it quite hard to build a recursive argument around "if we can substitute one neuron with an electronic equivalent without changes, we can substitute all neurons with electronic equivalents without changes", because that argument would also work with "removing the neuron" instead of "replacing it with an electronic one" (and in fact, if removing a single neuron has no effect on the perception of pain, and replacing a neuron with an electronic one has no effect on the whole either, how does that help you in any way?).
The analogy with a bird's bones doesn't quite hold - there are specific bones that we can remove that will completely prevent a bird from flying, and others that will have no effect; modern science gives us a pretty good idea of how a bird's various bones fit in the "enabling flight" hierarchy. This clear and well defined "hierarchy of bones for flying" does not seem to be present in neurons.
We might yet establish a clear, systemic hierarchy of neurons one day that lets us explain consciousness rigorously, the same way that we can explain how the different bones and muscles of a bird let it fly! But we are far from it, and there is no certainty about the feasibility of such an endeavor.
The recursive (more correctly: inductive) argument is that there is no effect. It's like saying that no matter how many times you subtract 0 from a positive integer, the result will still be a positive integer - because subtracting 0 has no effect on the functioning of the system (the two are "the same" - not just similar but exactly the same). Subtracting epsilon (an arbitrarily small number) is similar but the recursive argument breaks - if you do it enough times it's no longer a positive integer.
The fact that your brain is already wired somewhat differently than it was when you started reading this comment doesn't have much bearing on this argument.
To break the inductive argument and prove that mechanical birds are impossible, you would have to show that there is some bone or ligament or muscle or feather or something else where, if you replaced it with a plastic/mechanical/etc. version with the exact same properties and behavior, the flying system could no longer function "the same". In other words, that there is some bone or muscle or other mechanical thing that even in a hundred thousand years science could not replace with a non-biological version without making some change to the behavior of the whole system; that it could never be made "the same" even in theory. (I say "in theory" because it is quite likely that we could never do so with a neuron; the whole switching-back-and-forth example is very far-fetched.)
So it is only a thought experiment/inductive argument: not a practical statement about biology.
It is interesting to point out that although no bird has the carrying capacity of a modern plane, we haven't reproduced mechanical birds that function the same way birds do. Science doesn't really have a strong incentive to exactly copy the mechanics of a pigeon with plastic components, plastic feathers, ligaments, and some kind of muscles, composed exactly the same way. Other than a novelty, it doesn't really "get us" anything.
In perhaps the same way, science may not have any strong incentive to exactly copy humans into a virtual reality, setting up neural networks that are wired the exact same way and feeding the virtual neural nets inputs exactly like what a child would hear and see around them, etc. Even though very large server farms like Amazon's today have more memory and do more calculation than the human brain's neural connections imply (given our estimate of what actually does the computation), so that they're like "jets" and the human brain is like a "bird", there is still no clear incentive to copy humans in a way that leaves the simulation conscious.
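A back-of-envelope sketch of the scale comparison being made here, in Python. All figures are rough, commonly cited assumptions rather than measurements, and the "one weight per synapse" encoding is a simplification of my own:

```python
# Rough, commonly cited assumptions -- not precise measurements.
NEURONS = 8.6e10            # ~86 billion neurons in a human brain
SYNAPSES_PER_NEURON = 7e3   # very rough average synapse count
BYTES_PER_SYNAPSE = 4       # assume one 32-bit weight per connection

synapses = NEURONS * SYNAPSES_PER_NEURON       # ~6e14 connections
storage_bytes = synapses * BYTES_PER_SYNAPSE   # ~2.4e15 bytes (~2.4 PB)
```

A few petabytes is within reach of a large server farm's aggregate memory, which is the "jets vs. birds" comparison: the raw capacity is there, whether or not anyone has an incentive to use it this way.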
What does a conscious simulation get us? What does a mechanical pigeon get us?
So for this reason there is at least some reason to think that we are not about to build a conscious machine - we don't need it.
> you would have to show is that there is some bone or ligament or muscle or feather or something else, where if you replaced it with a plastic/mechanical/etc version with the exact same properties and behavior, the flying system could no longer function "the same"
Yes. This implies that you can measure the properties of the system you intend to recreate, and reproduce them at a high enough fidelity. The core of my rebuttal is that we might not be able to a) measure all the properties that matter and/or b) recreate them with enough fidelity for certain categories of problems. Bones, we're good. Neurons at extremely large scales? I wouldn't bet either way.
> Science doesn't really have a strong incentive to try to exactly copy the mechanics of a pigeon with plastic components and plastic feathers and ligaments and some kind of muscles, and trying to get it to be composed exactly the same way. Other than a novelty, it doesn't really "get us" anything.
It could get us drones that would fly 10 months without landing. Science doesn't really get much stronger of an incentive than "the military wants it".
> The stuff you said about the laws of physics and computation is a complete and utter red herring.
You can't argue with math. There are functions that are simply not computable on Turing machines. The canonical example is the halting function, which reports in finite time whether another function will halt on a given input. It is easy to show that this function cannot be computed.
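The standard diagonalization can be sketched in a few lines of Python. The `halts` oracle here is hypothetical; the whole point of the argument is that no correct implementation of it can exist:

```python
def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical halting oracle: True iff the program halts on the input.

    No total, always-correct version of this function can exist.
    """
    raise NotImplementedError("uncomputable, by the diagonal argument below")


def paradox(program_source: str) -> None:
    # Diagonalization: feed the program its own source as its input.
    if halts(program_source, program_source):
        while True:  # if the oracle says "halts", loop forever
            pass
    # if the oracle says "loops forever", halt immediately
```

If `halts` could be implemented, running `paradox` on its own source would contradict the oracle's answer either way, so no such function exists.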
There is no a priori justification for discarding the possibility that simulating the substrate of qualia is outside the realm of computable functions, or for discarding the hypothesis that qualia itself may not be Turing-computable. Intriguingly, Cabessa and Siegelmann (https://pdfs.semanticscholar.org/a112/4aca43b60b1e5777ae9a9c...) proved that recurrent neural networks with rational, time-evolving weights are super-Turing. Whether or not synaptic weights are actually rational is another story... but the point is you can't just state that it's possible to simulate qualia. We simply don't know.
On the other hand, since I (and, presumably, you) experience qualia, it's clear the physical substrate supports it, and that it is in theory possible to create something that experiences qualia. And given our shared lineage and shared physical substrate, it's ridiculous to assume that shrimp don't experience pain. You have to at least consider that they might have sensations; all of the ingredients are there. Not to mention animals even closer to us: it seems almost certain that your dog experiences pain too, almost as richly as you do.
Finally - shrimp don't have a central nervous system. So chopping their head off probably doesn't block their sensation, whatever form that may take.
Your discussion of non-computable functions is a distracting red herring; it derails the discussion.
Remember: when you talk about qualia not being computable, you are like someone saying "heavier-than-air flight is impossible" while birds are flying around merrily... "well yeah, but they're biological, not metal." Not much of an argument for why flight is impossible.
So what. Qualia are an emergent property of the behavior of quite well-defined physical systems. They can be simulated; they have no connection with anything that can't be simulated. End of discussion.
It's just a waste of our time to talk about it, and it distracts us from addressing what to do when someone makes a pain-sensing simulation, especially if that simulation has human neural structure, and certainly if it reports having experiences. That is far-fetched today, but it will happen; it's simply impossible that it wouldn't.
When I was a child I told someone that checkers would "obviously" be solved completely (i.e., whether perfect play is a win for one side or a draw), since there were just so few possible states that it was well within what would soon be computable. They called me crazy. I gave an upper bound on the number of states, and if you compared that to the computation available, it was simply impossible that it wouldn't be solved. It was brute-forced, and we now know the outcome for perfect play (i.e., I was "right").
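The kind of upper bound meant here is easy to reproduce. A naive encoding (my assumption, not the commenter's exact one): 32 playable squares, each empty or holding a man or king of either color, giving five possibilities per square:

```python
# Naive upper bound on checkers board states: 32 playable squares,
# each empty or holding one of 4 piece types (2 colors x man/king).
SQUARES = 32
STATES_PER_SQUARE = 5

upper_bound = STATES_PER_SQUARE ** SQUARES  # ~2.3e22 states
```

The count of actually legal, reachable positions is far smaller (on the order of 5 * 10^20, per the checkers-solving literature), which is what made the brute-force effort plausible.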
Simulation of a whole brain that reports qualia (the same as you or I do) is obviously within what will be computed. It's simply something that will be done. I don't want to waste a second of my life entertaining any red herrings to the contrary.
Your point about shrimp and central nervous systems is well-placed, I missed that. Here is an article on Wikipedia that discusses this in detail:
That is a good article and covers many of the points. However, on balance I would be inclined to think that if something clearly behaves, and indeed communicates, as though it were feeling pain, then we should assign a high probability that it does experience it, given that our qualia (which every person has an intuitive grasp of, since they experience them) have the same structural evolutionary source. (I mean the exact wiring.)
While today's video game characters might also "behave as though" they experience qualia (I mean flailing around on screen and so forth), they do not have the same structural evolutionary source (no neural net or anything), so for this reason I would not assign the idea of pain to them with certainty today.
Later it will be problematic (if machine learning is used to make video game characters flail around in pain realistically), but that's not the case today. The end-game, however, is very problematic, as it will for sure be human.
There is no question about this. We have the complete source code! Absolutely: we have the complete source code for how humans are wired (in the form of DNA, which has been sequenced), and in terms of emulating it with some simplifications (rather than at a molecular/atomic level), we know that the end-game is just not that many cells in the human body or the brain.
The conclusion is set in stone; it's a waste of time to even talk about anything else. The question is what this means for us and how we deal with it when it happens. What do we do if someone makes a neural net and reports that they have created pain? It will be a huge problem and issue.
There is a problem with this argument: numbers are abstract entities. A number never really exists, only a physical system that a particular number models. If I write down 123, it is not the case that 123 now exists, only that the pixels on your screen now form a pattern that can be modeled by the abstract concept denoted by the symbol "123".
Now, pain clearly does exist. But pain is a process, and processes don't exist unless they actually happen. A simulation of a process is itself a process, but it is not identical with the process that it is simulating, it is a model of that process. A simulation of pain might have enough fidelity that it would be reasonable to call it "real pain" that is felt by some actual entity with subjective experiences as real as our own, but it's far from a slam-dunk.
In particular, the subjective experience of pain involves more than just the stimulation of pain receptors and the registration of those signals in the brain. General anesthesia eliminates the subjective experience of pain, but it doesn't do this by shutting down pain receptors, it does it by shutting down consciousness. Consciousness is an essential component of the subjective experience of pain.
Now, consciousness is a continuum, not a dichotomy, and it's possible that a shrimp has some level of consciousness and hence can feel pain. But again, it's far from a slam-dunk, and your argument certainly doesn't settle the matter one way or the other.
> your argument certainly doesn't settle the matter one way or the other
I consider the matter settled and don't wish to engage in dialogue around it. For me there is nothing to discuss and it is a waste of resources to do so. Obviously others could disagree with me.
>A simulation of pain might have enough fidelity that it would be reasonable to call it "real pain" that is felt by some actual entity with subjective experiences as real as our own, but it's far from a slam-dunk.
I consider this to be a slam-dunk, and the reason why is an inductive process of replacing one cell at a time with one that interfaces with the other cells in the same way (as a black box) but simulates the inner workings of that individual cell rather than performing them biochemically. Obviously consciousness does not occur within a single neuron or nerve cell, so it cannot be that replacing that neuron with a black box, one that has the same interface with the other cells but is internally a simulation, would make consciousness cease. Clearly, if it has the same interface with the other cells, then the system will still have consciousness, and by induction you come to the infallible conclusion that it is possible to have a process of real pain that is entirely simulated.
It is like the inductive reasoning for why it must be possible to build an artificial object that flies: if you replaced any single bone or ligament in a bird with a mechanical version with the same properties, it must still be able to fly; therefore an entirely mechanical object capable of flight must be theoretically possible.
In practice flight isn't done this way, and no bird is copied at the muscular level to achieve flight. The argument just shows that flight is certainly and irrefutably possible.
This inductive argument shows that it is irrefutably possible (for me). I consider it a slam-dunk case, beyond any need for discussion, yes.
One thing that you do not address in your comment: under the scenario where pain is simulated with enough fidelity (on a neural level) to be actual pain, how does this change if the simulation is deterministic? In what sense is its calculation then a "process", given that we have no influence over its outcome?
I do have a question for you: does the nonillionth prime exist, in your opinion? Does it have a specific, well-defined value?
> I consider the matter settled and don't wish to engage in dialogue around it.
And yet here you are.
> I consider this to be a slam-dunk, and the reason why is an inductive process of replacing one cell at a time with one that interfaces with the other cells the same way
All that argument shows is that an embodied simulation of a human brain would feel pain (and BTW, I agree with you about that). But a shrimp brain is very different from a human brain. In fact, it's a bit of a stretch to say that a shrimp even has a brain.
>All that argument shows is that an embodied simulation of a human brain would feel pain (and BTW, I agree with you about that)
sorry, that's what most of my comments were about. the shrimp thing is less interesting to me, because I have a subjective knowledge that humans perceive pain. for me, I will give shrimp the benefit of the doubt due to our shared evolutionary background and their behavior.
I skimmed your link and found it murky; it didn't define "exist" to my satisfaction. With that said, I didn't define "exist" either, and I don't know what I had in mind when I asked you if the nonillionth prime exists. I don't think I asked a well-defined question, so I'm withdrawing it.
Before I start reading this comment (and I'm excited to!), let me admit that I know what qualia is, and obviously this opens up a whole new can of worms about what pain is, whether our conception of pain matches anything a shrimp is able to feel, whether these two things match on any reasonable scale magnitude-wise, ...
Those same predators could very possibly be raping mates or abandoning their unhealthy children to die. But that doesn't mean those are things that we, as advanced, intelligent, and capable animals, should do. We have higher moral standards than that.
It is not that easy, but yes. The fluffy dry mass (the tinder) is very important - it is the thing that catches fire at low temperature. See his pump drill video for some of the pain points.
I was impressed with his spear-thrower demonstration; I was pretty convinced it would have done for me if I had been in the way. Hunting seems hard and slow even with a gun, and I suspect that scenes of a pig with a spear in it, screaming and thrashing around, would not do a YouTuber's profile much good at all.
It'd also be a good way to get yourself killed or at least severely messed up by tusks. There is a reason medieval boar-hunting spears are built with cross-trees below the spear point.
I don't have a problem. I've watched his videos for a long time and it is fairly clear that he was ignoring the key element of primitive life: hunting. And in his shrimp trap video, he finally broached it. Now, if he brought the whole thing to completion, he would hunt something properly.
Alabama has legal spear hunting, and it would probably be the likeliest place to host stone-tipped spear hunting with a film crew.
This guy's hobby is building stuff with stone age technology and sharing videos with us. He goes out to his plot on the weekends, then drives home and goes to work for a week. He doesn't need to hunt to keep food on the table.
If he went hunting instead, he probably wouldn't have time to build huts and farm plots. I doubt he'd have much of an audience if his videos were about stalking rabbits in the bushes with a spear in his hand.
In most parts of the world, hunting with stone age implements would be illegal and at the very least it's (arguably) unethical because it would put the prey animals through unnecessary suffering.
Yeah, hunting was an important part of stone age hunter/gatherer culture but this guy isn't there trying to survive with stone age tools. I'm sure there are hunting videos in youtube if someone enjoys watching that instead.
It would be a fairly unlikely place for him to go though, given that everything he does is on his own land (and other areas where he has the owner's permission). He's presumably constrained by the local wildlife and regional laws.
In Queensland, Australia where he is based, it's illegal [1] to hunt anything which isn't feral or an invasive species of hare or deer, which are pretty hard to find, especially on small plots like the one he owns.
There are feral pigs in QLD, which have their own hunting culture. We were given a copy of Bacon Busters magazine: http://www.sportingshootermag.com.au/news/new-bacon-busters-.... In it we learned that real men hunt pigs, real real men hunt pigs with a bow, and real real real men hunt pigs with a knife (joking, of course). Kevlar armour can be purchased for your pig dogs (chest and throat). And when you fell your pig, take a photo with a stick propping open its mouth... unless you have an empty beer can to do the job.
But yes, while there are feral pigs in QLD, it's a bit much to expect him to hunt them with a stone-tipped spear just to appease some random naysayer on the internet. I mean, the guy builds a tiled-roof house with chimney and underfloor heating using nothing but the stuff he pulls from the forest by hand, and somehow that's not enough - apparently he has to risk his life attacking a feral pig with stone tools to get 'cred'...
He's not in Alabama, he's in Australia. Agree with it or not, hunting is highly restricted there. I'd rather him keep making videos than be fined or incarcerated out of existence.
Because before the computer you wrote this comment on existed, at some point in early history the original hardware hackers were figuring out how to hack mother nature and their environment, first to keep themselves alive, and then to thrive enough to build up to where we are now.
The evolution of technology is relevant to all of us here. It is utterly humbling and fascinating to try to imagine a time when this was bleeding-edge technology that was saving lives from hunger. Some random cave person somewhere came up with it, with a much smaller brain than yours.
What he is doing is not entirely different from building a computer from first principles... He's just doing it for "primitive technology" and honoring the real original neck beard (literally) who was nerdy enough to come up with this concept.
Did you watch the video? This guy makes tools using just the resources from the woods and his own body. Interesting for anyone with a passion for engineering.
This is his weakest video to date, though. If you weren't familiar with his body of work, I can see how you'd be underwhelmed. It's informative as usual, but less interesting than all the others: right after he builds his own smelter from scratch (inventing his own airflow tool along the way) and smelts iron from riverbank bacteria, a simple cane trap is a bit anticlimactic...
Hacking is not the same as computing or programming.
The ways that the Primitive Technology Guy (PTG) has figured out how to make clay roofing tiles, charcoal, and a primitive forge blower are pretty fascinating. He's managed to build a pretty amazing little house with all of that.
That PTG has managed all of these with literally nothing that he could not source from his local environment is quite impressive.
No knife, no tools, no matches, just a pair of shorts and an informed mind -- that's pretty much hacking distilled down to its bare essentials.