Can't speak for DRM, but there's far more demand for playing a multiplayer action game without experiencing cheating than for a similar game that isn't a rootkit. Cheaters are nasty. Devs make rootkit anti-cheats simply because there's no better alternative, not because they're evil.
A plant is happy to give sugary nectar to pollinators, which suggests it's ok with parts of it being eaten. It's likely not ok with being damaged.
So go ahead and take the apple; it would drop anyway. And take the fruits. If you want to go all out, follow a Jain-style diet, where carrots and mushrooms are not ok but mints and herbs are a-ok.
In my opinion, “I'ma eat you, coz I am more entitled to live than you are” is unethical. But then again, I also chose a plant-based diet.
It is impossible for a more complex organism to live on Earth and not feast on other organisms, so you could say I should avoid eating plants too. Correct, but since I cannot avoid killing other beings, I have chosen the path of less overall suffering.
“I'ma eat you, coz I am more entitled to live than you are” is the default position for every animal out there on earth, fwiw. Would you call a harbor seal pure evil because it plays with its prey and kills it brutally and slowly? Fish like tuna are brutal killers and even kill each other. When you study it, nature is brutal, unsavory, cold, and cruel. A human pig farmer, on the other hand, is a remarkably benevolent predator. They will hire a veterinarian if a pig is sick, after all. They won't rip the pig apart alive, limb from limb. They won't slowly kill it over the course of days.
I would not call animals that kill other animals evil. They don't have a choice; it is their nature.
Humans, by virtue of their empathy (which can be extended beyond their own species) and intelligence (however we decide to evaluate that), do have a choice.
What is suffering and how do you measure it? How do you know that plants don't suffer? Why is less suffering more ethical?
I don't believe you've actually thought any of this through in an intellectually rigorous way. Your choices just allow you to falsely believe that you're somehow superior to people with other priorities and values.
I perceive suffering as unnecessary pain and/or fear. I can certainly know when I suffer myself. And while it's true that I cannot directly measure the pain or fear of other organisms, to some extent I can extrapolate my own experience to that of other beings. Empathy. First and foremost with “higher” animals. E.g., if a dog screams when somebody treads on its tail, I draw on my own experience of somebody stepping on my toe and conclude that the dog is screaming in pain. I could be wrong, of course, but that's how the reasoning goes. And don't we all behave based on our conscious or unconscious evaluations?
The further away, phylogenetically, a being is from the human species, the more difficult it is for me to assess whether it is suffering (i.e., experiencing unnecessary pain or fear). But based on my observations of plants, and my knowledge of their anatomy and physiology, I have concluded that yes, they may feel something, perhaps even pain, but it doesn't look much like they do. I could still be wrong; that's just how my reasoning goes. If one day I am faced with more tangible evidence, then I will obviously have to re-evaluate my behaviour. Until then I choose what seems to be the road of less suffering.
40+ years ago I worked for several months at a “chicken farm” where chicks were raised from “Easter chickens” to full-grown chickens in houses of 10,000 birds each. It was a waking nightmare. It is _the_ most horrible experience I have had in my life, because of the obvious and extensive suffering of the animals, and it made me choose to become a vegetarian: I didn't want to contribute to that specific suffering. Later evidence has shown that pigs and cattle and other farmed animals suffer too because of the way humans raise them. I simply don't want to be a part of that. Period.
My choice of diet doesn't change my value as a human being even one iota (the same way the value of any other human being isn't measured by their choice of diet). I could be very, very wrong about the whole thing, and in that way be making things unnecessarily complicated for myself (and my fellow human beings), but I cannot live on Earth without making decisions, and I try to make the best ones I can, just like all other humans do. All other humans could be right, for all I know. I could be the fool here. Who am I to judge.
Eating meat doesn't mean the animal suffered. Factory farming might. The farming and slaughter process may be more efficient when it causes suffering, but it doesn't have to be.
I find it a more compelling argument that it is generally awful to end something's joyful existence and experience. But apparently not compelling enough to dramatically alter my diet. It is enough, though, to make me save/relocate some bugs in some cases rather than kill them.
That poster seems to have thought things through: a first-hand account of an industry of suffering, our best evidence on how plants work, and a well-reasoned conclusion.
I think it is a huge reach to assume the lettuce head isn't suffering when you pluck it. It is anthropomorphising at its finest: some built-in cognitive dissonance lets us see a plant as an other not worthy of life, whereas something that bleeds red fills us with dread. I'd hazard a guess that if most people hadn't spent a childhood ripping leaves off trees and throwing sticks at plants, their first exposure to the harvest process of most crops would provoke feelings similar to their first exposure to the slaughter of livestock.
If there are opportunities to do things with more respect and dignity, for either plants or animals, we should take them. But at the same time we shouldn't feel so morose about the reality of life on this planet, where literally everything eats something else, usually in a very brutal manner.
Game theory is not in the people's favor. Consider every other company's game theory: 'we go out of business unless... (we stop this research before it is too late and a cure is developed)', which can lead to some dark places.
Also: companies do not think the way a person thinks. Insulation from consequences, diffuse responsibility, and other mechanisms come into play here.
not who you are replying to - but:
Shakespeare was limited by the number of interactions he had with humans. He did not have the internet.
We also have neurology as a science now. So that's one bit of evidence for the claim.
Of course Shakespeare had a profound understanding of human nature. And of course he did not have the working vocabulary and knowledge base of modern psychology which has been built up over time by many humans working together. Two things can be true.
> not who you are replying to - but: Shakespeare was limited by the number of interactions he had with humans. He did not have the internet.
The internet may increase the number of interactions, but it decreases their quality.
Looking at most online interactions, it seems to me that people show less empathy and understanding than they do IRL.
> Of course Shakespeare had a profound understanding of human nature. And of course he did not have the working vocabulary and knowledge base of modern psychology which has been built up over time by many humans working together. Two things can be true.
Does that help write better books? If the claim were true, the best fiction would be written by psychologists and neurologists. Is it?
I think that knowledge falls on the wrong side (for writing fiction) of Chesterton's distinction, made in one of the Fr Brown stories (I cannot remember which), between understanding someone from the inside with empathy and from the outside with analysis.
Hard disagree. I made an analogy in another reply to Monet not knowing quantum physics. Lacking that information didn’t deter his famous explorations of light effects.
Humans are great at figuring out how things behave before we have a great model of why they do it. And by Shakespeare’s time, we had a pretty good grasp on practical human psychology, even if we had less understanding of the mechanisms behind it.
We are excellent at figuring things out. We get better at it over time, as we grow our collective knowledge base.
I agree that we are great at figuring out things before we have a model of why they work. And we have a mountain of context to work from already. Shakespeare wasn't an idiot, and he wasn't working in a vacuum, but his 'context window' was 'smaller' than a lot of people's today (quotes because the terminology is dubious).
Art is improving. Science is improving. Human understanding, and the communication of that understanding, is improving too. That is my point.
I won't argue against that at all. Someday we're going to have someone with Shakespeare's brilliance and modern foundations, and I can't wait to see what that looks like.
(And maybe we already do have that person. I'm shamefully out of date with modern literature.)
> Of course Shakespeare had a profound understanding of human nature. And of course he did not have the working vocabulary and knowledge base of modern psychology which has been built up over time by many humans working together. Two things can be true.
Yes, but this doesn't prove that a modern author could produce a better text than a historically great author, which was the original line of thought. Or is there a specific modern text that you have in mind that proves the point?
Then every textbook is an example of this measure of 'better' improving over time...
How about 'more representative of the human experience'... (or what we enjoy/like more)?
Then we measure how well a human relates to a book, which is taste: a subjective quality that is notoriously hard to measure in any meaningful way. It becomes not a single measure but a collective one, taken against the sum of all humans who interact with the book: an untenable standard (and biased towards the present anyway) which doesn't do your position any charity.
...So how do you measure a book to be 'better'? That's the neat part: you don't. You can measure what you 'like' more, you can measure 'features'... but we probably won't even agree on what makes a book 'better'. We like what we like, and most of us have a hard time even explaining why we like something.
Decoupling working from living means only intrinsically valuable things get worked on. No more working a 9-5 at a scam call center or figuring out how to make people click on ads. There is ONLY BENEFIT (to everyone) in giving labor such leverage.
Not every job needs to, or should, exist: everyone having a job isn't utopia. Utopia is being free to choose what you work on. This drives the market value of labor up. Work that needs to get done becomes aligned with financial incentives (farming, janitorial work, and repair industries would soar to new heights).
UBI is a necessary and great idea: a bottom floor to capitalism means we can all stand up and lift this sinking ship.
The tone of the complaint about developers "being reliant on stack traces" sounds an awful lot like: "real programmers just use punch cards and ignore any tooling we've come up with in the last 70 years: just use your brain".
I speak in hyperbole here. I only point out that the giants whose shoulders we stand on left us fantastic tools to use.
There is something to the philosophy of keeping your interface 'dark' so that you learn to 'shine'. But that's something I'd chalk up to a personal development journey.
A good argument. However, comparing an AI model to a Xerox machine is reductive and not a sound metaphor...
It cannot be treated as just a Xerox machine, but it can be treated as a Xerox machine that holds within it all the copyrighted works it was trained on (saved in the form of weights/biases), from which a user can inventively request combinations. In that case the AI model itself is a distribution of works under copyright. Encrypting/transforming copyrighted works and transmitting them is a violation of copyright (afaik; ianal).
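To make that concrete, here's a toy sketch (nothing like a real transformer; just a made-up word-bigram counter in Python) of how innocuous-looking 'weights' can still hand back training text verbatim when prompted:

    from collections import defaultdict, Counter

    def train(text):
        # "Training" is just counting word successors; the counts
        # are the entire "model", i.e. its weights.
        weights = defaultdict(Counter)
        words = text.split()
        for a, b in zip(words, words[1:]):
            weights[a][b] += 1
        return weights

    def generate(weights, prompt, max_words=20):
        # Greedy decoding: always emit the most common successor.
        out = [prompt]
        for _ in range(max_words):
            successors = weights.get(out[-1])
            if not successors:
                break
            out.append(successors.most_common(1)[0][0])
        return " ".join(out)

    corpus = "shall I compare thee to a summers day thou art more lovely"
    model = train(corpus)
    print(generate(model, "shall"))  # prints the training text verbatim

Real model weights are vastly more entangled and lossy than a successor table, which is exactly why the legal question is murky, but "it's only weights" doesn't by itself mean the works aren't in there.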
This is all to say, copyright, as it stands, needs heavy reform. I'm rather copyleft, because all of this is vestigial nonsense from an age when printers from the 1800s set the rules, and our thinking hasn't updated yet.
Copyright only becomes an issue when you distribute to others. Private translations/transformations are unenforceable: you can mark up a book you've bought as much as you want.
Complex learning behavior exists at levels far below the neuron. Chemical chains inside cells 'learn' in response to stimuli. Replicating systems that run on chemistry is going to be hard, and we haven't come close to doing so. Even the achievement of recording the neural mapping of a dead rat captures the map, but not the traffic. More likely we'll develop machine-brain interfaces before machine self-awareness/sentience.
I think this comes down to whether the chemistry is providing some kind of deep value or is just being used by evolution to produce a version of generic stochastic behavior that could be trivially reproduced on silicon. My intuition is the latter: it would be a surprising coincidence if some complicated electro-chemical reaction behavior provided an essential building block for human intelligence that would otherwise be impossible.
But, from a best-of-all-possible-worlds perspective, a surprising coincidence that is necessary for observers to exist and label it as surprising isn't crazy. At least not more crazy than the fact that slightly adjusted physical constants would prevent the universe from existing.
> My intuition is the latter: it would be a surprising coincidence if some complicated electro-chemical reaction behavior provided an essential building block for human intelligence that would otherwise be impossible.
Well, I wouldn't say impossible: just that BMIs (brain-machine interfaces) will probably come first. Then probably wetware/bio-hardware sentience, before silicon sentience happens.
My point is that the mechanisms of sentience/consciousness/experience are not well understood. I suspect the electro-chemical reactions inside every cell are critical to replicating those cells' functions.
You would never try to replicate a car without ever looking under the hood! You might make something that looks like a car and seems to act like a car, but has a drastically simpler engine (hamsters on wheels), with design choices that prop up that bad architecture (like making the car lighter) and unforeseen consequences (the car flips in a light breeze). The metaphor transfers nicely to machine intelligence, I think.
If not this exact paper, this kind of memetic attack likely already exists in the wild. The question of how successfully it gets inside an LLM is why training data should be verified by a human (and of course ethically sourced data would reduce the attack surface).
In all seriousness, DRM/anti-cheats => rootkits/RATs. Don't fall for it. Demand better.