Hacker News

i think that before a science can be "a science" with powerful theories and universal laws, there needs to be a long period of existing as a proto-science where people aren't doing experiments and are just observing and describing.

before darwin, you had to have linnaeus just describing and cataloging animals; before {astronomy theory guy}, you had to have {people just tracking and observing stars}.

psychology may have tried to jump the gun a bit by attempting to become theoretical before there were a few generations of folks sitting around quantifying and classifying human behavior.

this was definitely true in cognitive neuroscience. once folks got their hands on fMRI, this entire genre of research popped up that was "replicate an existing psychology study in the scanner to confirm that they used their brain". imo, a lot more was learned by groups that stepped back from theory and just started collecting data and discovering "resting state networks" in the brain.






I suspect that after 400 years of the scientific method, we may be reaching the limits of single-variable experiments in a number of fields. Statistical methods can find patterns across many interacting variables, and as we advance in those areas I expect us to advance in messy sciences like psychology. We’ll be able to more reliably look at people or other chaotic systems and see how three inputs work together to create a single effect.

The math for multivariate experiments has been well understood and applied since the early 20th century.

Modern industry would not exist without it.
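A minimal sketch of what that early-20th-century math (Fisher's factorial designs) buys you: a 2x2 factorial experiment where two factors interact, which one-variable-at-a-time experiments would miss entirely. The cell means below are invented purely for illustration.

```python
# Hedged sketch: a 2x2 factorial design with made-up response values.
# Factors A and B each take levels 0/1. Varying one factor at a time
# estimates the main effects but never reveals the interaction.

# cell_means[(a, b)] = mean response with factor A at level a, B at level b
cell_means = {
    (0, 0): 10.0,
    (1, 0): 12.0,   # A alone adds ~2
    (0, 1): 11.0,   # B alone adds ~1
    (1, 1): 18.0,   # together they add 8: a strong interaction
}

# Main effect of A: difference in response averaged over B's levels
effect_a = (cell_means[1, 0] + cell_means[1, 1]) / 2 \
         - (cell_means[0, 0] + cell_means[0, 1]) / 2
# Main effect of B, averaged over A's levels
effect_b = (cell_means[0, 1] + cell_means[1, 1]) / 2 \
         - (cell_means[0, 0] + cell_means[1, 0]) / 2
# Interaction: how much the effect of A changes when B is switched on
interaction = (cell_means[1, 1] - cell_means[0, 1]) \
            - (cell_means[1, 0] - cell_means[0, 0])

print(f"main effect A: {effect_a}")         # 4.5
print(f"main effect B: {effect_b}")         # 3.5
print(f"A x B interaction: {interaction}")  # 5.0
```

The interaction term is exactly the "three inputs work together" effect the parent comment describes, and it falls out of the same arithmetic Fisher formalized in the 1920s.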

The problem with psychology experiments is that the mind has many hidden variables which cannot be easily accounted for.


Moreover, hard science rests on at least three things: objectivity, quantifiability, and empiricism. We suppose that objective phenomena exist and can be independently verified and confirmed by other observers, using quantifiable measures to agree upon exactly what it is we are observing when conducting an experiment.

The issue is that the mind is by nature subjective and wholly private, inaccessible to outside observers. Further, whereas in physics it is possible to separate the experimenter from the experiment (to an extent; observer effect and quantum mechanics notwithstanding), in matters of the mind the observer and the observed are of necessity one and the same; you are in fact part of the experiment yourself.

> The problem with psychology experiments is that the mind has many hidden variables which cannot be easily accounted for.

This is where the pillar of quantifiability breaks down, and barring advances in techniques for inspecting the brain with greater spatial and temporal resolution it's hard to see how one can quantify what cannot be directly observed.

Empiricism is the only one of the three aforementioned pillars that can still hold up within this domain, and there is a centuries-long tradition of studying the mind subjectively and qualitatively (as opposed to quantitatively), yet in a still empirical fashion with falsifiable hypotheses; various mystical and meditative practices exist in this space, such as the fire kasina practice, especially as formulated by a trauma physician [0], complete with falsifiable hypotheses and steps for independent reproduction. What distinguishes practices in this space from harder sciences is that the phenomena observed are subjective, not "out in the physical world," and there is no instrumentation for measuring them apart from your own faculties of perception.

As an aside I found the comparison to shamans and spiritual teachers in TFA interesting, since I have always considered psychology and spirituality to concern themselves with the same subject matter and problem domain of minds; one could say that they are both proto-sciences of mind, and psychology is the latest iteration of this tradition (though as TFA states somewhat misguided and out of touch with its roots in mysticism; on this latter point see Jung's Psychology and Alchemy for a discussion of where analytical psychology connects with the older esoteric traditions).

[0] https://firekasina.org/wp-content/uploads/2017/11/the-fire-k...


This is a misunderstanding. The mind is not wholly private and subjective; it's objectively quantifiable. Dualism ended a long time ago, and psychology is a materialistic science.

> The issue is that the mind is by nature subjective and wholly private, inaccessible to outside observers.

Not by nature, it's not. Unless you define it to have some immeasurable spiritual quality to it which is obviously not experimentally discoverable and so of little use discussing here.

> This is where the pillar of quantifiability breaks down, and barring advances in techniques for inspecting the brain with greater spatial and temporal resolution it's hard to see how one can quantify what cannot be directly observed.

Pretty much every experiment has a massive amount of relevant states which we cannot quantify.

There's a whole lot of quarks in 1 kg of steel but we don't need to know all of their states to measure macro quantities like its temperature and strength.

The mind has proven very resistant to yielding this sort of useful, measurable macro property.

It's a typical case of a chaotic system. Perhaps innovations in the modeling of such complex systems (not too different from the advancements we're seeing in ML) will be the key to better insights into the mind.
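What "chaotic system" means concretely can be shown with the standard textbook example, the logistic map: two trajectories that start almost identically become completely unrelated after a few dozen steps. (This is a generic illustration of sensitive dependence on initial conditions, not a claim about the brain specifically.)

```python
# Sketch: sensitive dependence on initial conditions in the logistic map
#   x_{n+1} = r * x_n * (1 - x_n)
# which is chaotic at r = 4.
r = 4.0

def trajectory(x0, steps):
    """Iterate the map `steps` times starting from x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.2, 50)
b = trajectory(0.2 + 1e-9, 50)  # perturb the start by one part in a billion

# Early on the trajectories agree almost exactly; after ~30 doublings of
# the perturbation they are unrelated.
print(abs(a[5] - b[5]))    # still tiny
print(abs(a[50] - b[50]))  # the perturbation has blown up to macro scale
```

This is why long-range prediction of such systems fails even with a perfect model: measurement error in the initial state grows exponentially, so the practical hope is modeling statistical or macro properties rather than individual trajectories.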


> Not by nature, it's not. Unless you define it to have some immeasurable spiritual quality to it which is obviously not experimentally discoverable and so of little use discussing here.

There is no need to bring "spirituality" (at least, as commonly and colloquially understood) or "souls" into the picture - see for instance qualia, whose existence is self-evident, and yet which are also not amenable to examination by external observers. Irrespective of how much you can poke and prod at the brain and measure wavelengths of light, there is "something it is like" to actually experience the phenomenon of "red", and this experience itself is not readily accessible to objective/quantitative methods.

> It's a typical case of a chaotic system. Perhaps innovations in the modeling of such complex systems (not too different from the advancements we're seeing in ML) will be the key to better insights into the mind.

I could be convinced that minds are ultimately emergent phenomena of plain physical and mechanical processes which are too frighteningly complicated for us to analyze with contemporary methods.


> is not readily accessible to objective/quantitative methods.

Just because the mind is not readily quantifiable with current technology doesn't mean that it's subjective "by nature".

> for instance qualia, whose existence is self-evident,

No, it isn't and I see no way to test for "qualia" so, applying Newton's flaming laser sword, it's not worthy of debate.

It's a very dangerous thing to draw conclusions about empirical phenomena from metaphysics. I suggest you stick to the scientific method when trying to understand the empirical world; it has been far more successful than philosophical rambling.


> No, it isn't and I see no way to test for "qualia"

You are verifying the existence of qualia every instant of your existence. It's the most immediately apparent empirical fact conceivable, since it is sensory experience itself, and you are testing its presence by the mere fact of being alive (philosophical zombies [0] notwithstanding).

> Before she left her room, she only knew the objective, physical basis of those subjective qualities, their causes and effects, and various relations of similarity and difference. She had no knowledge of the subjective qualities in themselves. [0]

I suggest you confirm the definitions and senses of terms you criticize before being so dismissive of them.

[0] https://plato.stanford.edu/entries/qualia/#Irreducible


The experience of qualia doesn't define any test. Against what metrics would we even test it? Are qualia orderable in any way? Do they have weight, size, or substance? Do processes of matter 'exist' physically in any meaningful sense?

We have not devised a test for others' qualia, and the only evidence we have of qualia is our own experience, which is a model of objective reality and not reality itself.

Bringing up philosophical zombies: we can't know if everyone inhabits their own universe, independent of each other, filled with zombie replicas. From anyone's perspective, that scenario would be 100% indistinguishable from what Occam's razor suggests. It therefore follows that qualia do not exert any sort of physical presence.

There are nonphysical things that we consider to exist, numbers being the prime example (har de har har). Numbers do not physically exist, but are a property we impose on various groups (which we distinguish in our own minds). Similarly, experience does not 'physically exist' but is a property of physical existence (which we politely assume other people to possess, rather than calling them zombies).


This may be true, but what does it have to do with psychology as a science? The way psychology deals with metacognition (thinking about thinking) is to correlate it with objective measures related to the phenomenon (performance accuracy, changes in BOLD response in task fMRI). We don't care if it's qualia or just reports of experience; that is a different debate and field entirely.

There's a feedback loop between technology and science. Without progress in science there can't be progress in technology. But also, without progress in technology there can't be progress in science.

In applied science they call this the ladder. It’s the same reason some people are desperate to keep certain sorts of manufacturing in country. Once you lose that mutualism you can’t just spin up a new factory. You have to do what every second world country on the road to being a first world manufacturing superpower has done: you make cheap shit until you can make mediocre products until you can make good ones until you can make high end items.

partially disagree with this: every proto-science historically had a bunch of wrong but highly sophisticated theories. medicine, alchemy (as mentioned in the article), physics, biology (Aristotle), astronomy. for some reason it seems you need the wrong theories to organize the empirical data.

I actually think Freud’s elaborate mental structures have some of this feeling to them.


  > for some reason it seems you need the wrong theories to organize the empirical data
There's a somewhat well-known essay on this by Isaac Asimov: "The Relativity of Wrong"

The scientific process is really misunderstood. People think you use it to find truth, but actually you use it to reject falsehoods. The consequence is that you narrow in on the truth, so the goals look identical, but the distinction does matter, at least if you want to understand why science can be wrong so many times and why that's okay. In fact, it's always wrong, but it gets less wrong (I'm certain there's a connection between that website and this well-known saying).

He's well known for his sci-fi, but he got a PhD in chemistry, taught himself astrophysics, and even published in the area. He also wrote physics texts. I found Understanding Physics quite enjoyable when I was younger; it isn't the same level of complexity I saw while getting my degree, but then it isn't aimed at university students.

Anyways, I'm just saying, he's speaking as an insider and I do think this is something a lot more people should read.

https://hermiene.net/essays-trans/relativity_of_wrong.html

I believe there's a copy of Understanding Physics here but currently offline: https://archive.org/details/asimov-understanding-physics


i think that astronomy/physics/gravitation is actually a pretty weird case in the history of science and most things don’t go that cleanly.

better examples might be medicine, where people just bounced around from one insane wrong theoretical system to another (humors to “empiricism” to bad air to Paracelsus to Avicenna and round and round and round). but somehow progress happened anyway. actual steady scientific progress only took off in the 19th century.

or chemistry, where alchemical theories were also completely bizarre, mostly mysticism and poetry. but despite being “not even wrong”, people following them became pretty good at laboratory skills.

After many centuries, this laid the ground for empirical chemistry, and after a few more centuries, a theoretical system emerged that is close to right. But there was a lot of progress even under the “not even wrong” theories.


I'm not sure this is quite right and really is counter to the "less wrong" progress. I know physics history best so that's what I used.

My history of chemistry is a bit better than my history of medicine, but I think you have to be careful with chemistry because much of the mysticism was coded language to guard trade secrets. Maybe it shouldn't be surprising that chemists are still a bit more secretive of their work than, say, physicists. We also need to disassociate modern alchemy from classic alchemy, just as we would astrology from astronomy. When there is a branching and one branch takes the legitimate work, it should be no surprise that the other branch becomes even crazier.

We also have the benefit of distance with physics and astronomy because not nearly as much was recorded, especially with respect to what everyday people believed. There were far fewer professionals in those fields (almost exclusively the rich, because you had to be rich to spend your time studying).

For medicine, I think you are being a bit too critical. Yes, "bad air" (miasma) was a bad theory, but did that theory lead to improved outcomes? Yes! The underlying mechanism was incorrect, but the model caused people to clean up the cities to smell better, and I'm sure you can see how that would result in decreased illness. It caused hospitals to be well ventilated, and in some cases it even caused doctors to clean their hands (when they smelled). Many parts of the belief were testable and could be confirmed through experiments, because smell is a confounding variable. It led to some doctors wearing masks because they wanted better-smelling things in front of their faces, and well... the mask helped even if the herbs didn't (some even did). But it is also true that there were falsifiable experiments, which people did perform, that did not fit the model. This is *exactly* what led to people discovering germ theory. There had to be an explanation for why those failed, right?

But it would be naive to think this belief persisted for so long only for lack of scientific experimentation or observation (both happened). It persisted because odor really is caused by particles. The problem is that odor particles are far smaller than germs, so you get a ton of false positives, and not all germs are associated with things that smell. But the two definitely correlate in many ways, including being able to pass through air and liquids.

You ask how progress happened anyway? It happened because they were not too far off course. Our view of history is often very limited; for most people it is a brief summary of what we know, so we do not see how many people were pushing back against the status quo. But I promise you, it wasn't just Semmelweis against the world (no matter how poorly he was mistreated). Our view of history is warped in ways that both overexaggerate and gravely underrepresent. These are not in contention; they are just different forms of noisy and biased information.

So I do not think medicine or chemistry runs counter to my point (we could discuss chemistry similarly; I will even mention that lead can be transmuted to gold, but seeing why requires an understanding that wouldn't arrive until Dalton, and the actual capability wouldn't arrive until very recently). I also encourage you to read the wiki on the history of germ theory (it even quotes Martin Luther discussing washing his hands, a full 300 years before Semmelweis). While it is concise, I think you can see that there was lots of pushback, well back into history, against disease being just bad air. Like many summaries, it captures the main events but misses much of the smaller progress in between; unsurprisingly, covering that would require far more text.

  https://en.wikipedia.org/wiki/Germ_theory_of_disease

i think we kind of agree with each other. doctors based on experience found things that worked better at treating patients. and they also had completely wrong theoretical models to rationalize these things.

i don't really see monotonic progress: e.g. going from Avicenna to Paracelsus seems like a lateral move at best as far as scientific knowledge (and both theoretical systems stuck around competing for hundreds more years), even though Paracelsus and his followers did make some important contributions to medicine in practice and medicine improved over that period.

so you had practical progress for a long time but scientific knowledge, meaning theories of nature, stayed equally bad.


  > doctors based on experience found things that worked better at treating patients. and they also had completely wrong theoretical models to rationalize these things.
And in truth this is a lot of what Asimov was talking about, because what matters is how accurate the theoretical model is.

I'll often say that "truth doesn't exist" but I'm careful about my context. Truth doesn't exist because it requires infinite information. But that doesn't mean we can't trend towards it. This statement is no different than "all models are wrong, but some models are useful."

This is why I talk about science not actually being able to prove things, but rather about ruling things out. That's what is important. The model is always wrong but is it improving? If it is having a better total outcome and able to explain more than the previous model, then yes. That's the thing. It has to be able to explain what the last model does, and more. If it can't do both those things, then it isn't progress (yet at least).

And I think our only distinction of scientific progress here is that you might need to consider that this is a high dimensional problem, with many basis vectors. What science is, is what is observable and testable. I jokingly call physics "the subset of mathematics that reflects the observable world." (This is also a slight at String Theory...)


Astronomy: you might go with Galileo Galilei, and you wouldn't be too wrong. [0]

[0]: https://en.wikipedia.org/wiki/Galileo_Galilei


  > there needs to be a long period of existing as a proto-science where people aren't doing experiments and are just observing and describing.
I think you misunderstand science.

  > before darwin
And this strengthens my confidence.

There was an understanding of natural selection even back to antiquity. How could there not be? Did people not tame the animals and plants? These are experiments, and they saw the results.

There were great contributions to astronomy long before Kepler. There were many experiments that influenced the whole field. There was a lot of important chemistry that happened long before Lavoisier (conservation of mass) and Dalton (atomic model).

The proto-sciences are nothing to scoff at. They aren't useless and they weren't ill-founded. They were just... noisy (and science is naturally a noisy process, so I mean *NOISY*). There's nothing inherently wrong with that. The only thing wrong is not recognizing the noise and placing unfounded confidence in results. That famous conversation between Dyson and Fermi discussing von Neumann's elephant wasn't saying that Dyson didn't do hard work or that the work he did had no utility; it was that you can't place confidence in a model derived from empirical results without a strong underlying theory. You'd never get to that realization if you only observed, because you'd only end up making the same error Dyson did.

Science, in its nature, is not about answers, it is about confidence in a model that approximates answers. These two things look identical but truth is unobtainable, there is always an epsilon bound. So it is about that epsilon! Your confidence! So experiments that don't yield high confidence results aren't useless, but they are rather just the beginning. They give direction to explore. Because hey, if I'm looking for elephants I'd rather start looking where someone says they saw a big crazy monster than randomly pick a point on the globe. But I'm also not going to claim elephants exist just because I heard someone talking about something vaguely matching the description. And this is naturally how it works. We're exploring into the unknown. You gotta follow hunches and rumors, because it is better than nothing. But you won't get anywhere from observation alone. Not to mention that it is incredibly easy to be deceived by your observations. You will find this story ring true countless times in the history of science. But better models always prevail because we challenge the status quo and we take risks. But the nature of it is that it is risky (noisy). There's nothing wrong with that. You just have to admit it.


  > There was an understanding of natural selection even back to antiquity. How could there not be? Did people not tame the animals and plants? These are experiments, and they saw the results.
Isn't this OP's point, though? People saw results, and even worked with what they saw, but underlying theories were all over the place and it wasn't until the time of Mendel that we started to have even the most rudimentary sense of rigor or scientific method when it came to the field that we now know as genetics. And the contention is that what came before Darwin and Mendel wouldn't stand up as rigorous science in our eyes, but was nevertheless the crucial foundation for what became the field of genetics.

In a way, yes, but I'm saying their proposal for how to handle the situation is too strong: don't do experiments at all.

  >>> there needs to be a long period ... where people aren't doing experiments and are just observing and describing.
I strongly disagree with this because observation isn't enough. You have to experiment.

Yes, it's fuzzy. But embrace the fuzziness. Acknowledge it. The truth is that observation isn't enough. You can NEVER discover truth from observation alone. Science doesn't work without interaction. There are three rungs of causal structure: correlation, intervention, and counterfactual. We know the memes about the first, but the other two require participation. You'll get lucky and have some "natural experiments," but these are extremely limited. What I'm saying is that we can work with these issues without tying our hands behind our backs and shooting ourselves in the foot. Stuff being hard is no reason to handicap ourselves. I'm arguing that only makes it more difficult lol
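The correlation/intervention distinction (the first two rungs of Pearl's ladder of causation) can be sketched with a toy confounder: observationally X and Y correlate strongly, but intervening on X, i.e. setting it by fiat and thereby severing its dependence on the hidden common cause, reveals that X has no effect on Y at all. The model below is entirely made up for illustration.

```python
# Sketch: a hidden confounder Z drives both X and Y. X never causes Y.
import random

random.seed(0)

def observe(n=100_000):
    """Rung 1: passive observation. Z confounds X and Y, so they correlate."""
    xs, ys = [], []
    for _ in range(n):
        z = random.gauss(0, 1)         # hidden common cause
        x = z + random.gauss(0, 0.1)   # X driven by Z
        y = z + random.gauss(0, 0.1)   # Y driven by Z, NOT by X
        xs.append(x); ys.append(y)
    return xs, ys

def intervene(x_set, n=100_000):
    """Rung 2: do(X = x_set). Forcing X cuts the Z -> X arrow; Y still
    depends only on Z, so x_set is (deliberately) unused below."""
    return [random.gauss(0, 1) + random.gauss(0, 0.1) for _ in range(n)]

def mean(v):
    return sum(v) / len(v)

def corr(xs, ys):
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    vx = sum((x - mx) ** 2 for x in xs) / len(xs)
    vy = sum((y - my) ** 2 for y in ys) / len(ys)
    return cov / (vx * vy) ** 0.5

xs, ys = observe()
print(round(corr(xs, ys), 2))  # strong observational correlation (~0.99)

# Yet intervening on X moves Y not at all:
print(abs(mean(intervene(0.0)) - mean(intervene(5.0))))  # ~0, within noise
```

Observation alone would conclude X predicts Y (true) and tempt you to conclude X causes Y (false); only the intervention exposes the hidden variable, which is exactly the "hidden variables" problem raised upthread about minds.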

I think one of the major issues is that we (scientists) fear that openly discussing limitations and admitting we don't have high (statistical) confidence will result in people not taking us seriously. And in many ways this is a reasonable fear. I'm sure many scientists, myself included, have annoyingly found that an honest limitations section ends up just being ammunition for reviewers to reject the work. A criticism my advisor has given me is that I'm "too honest". Maybe he's right, but I think that thinking is wrong, because science is about ruling things out, not proving results (you can effectively achieve the latter by doing the former, but you can't do the latter directly). And the younger the field (e.g. my field of ML is VERY young), the noisier the results.

Personally, I'd rather live in a world where we're overly open about limitations than not. We're adults and can recognize that's the reality, right? Because papers are communication from one expert to others? (And not to the general public, though they can see) Because as I see it, the openness is just letting others know what areas should be explored.

Don't fear the noise, embrace it. It's always there, you can't get rid of it, so trying to hide it only makes more.


You must observe before doing experiments; otherwise you would not know if you are asking the right questions. It takes a lot of observation before you can gather enough data to ask intelligent questions. You can ask questions before gathering 'enough' data, and those questions are useful, but you won't know which questions will lead you down a wrong path (alchemy) until you observe why those questions don't make sense (chemistry).

tl;dr: you both are right, and are talking past each other.


  > tl;dr: you both are right, and are talking past each other.
Then you misunderstand my pushback. I fully agree, you must do both, doing both was explicitly my point.

parpfish said

  >>>> there needs to be a long period [...] where people aren't doing experiments and are just observing and describing.
This is what I pushed back against because I agree with you, you need both. I understand my first comment was not as clear, but the one you replied to I think I made this apparent.

Observation and experimentation are tightly coupled; they drive each other. You need not start with one, and you can also perform them in parallel. I think observation driving experimentation is clearer, but the reverse happens too. You need to intervene, because experimentation is much like bug hunting in software: you need to find out where things differ, and those differences are always in the rare and unexpected events. If you just observe, you wouldn't know what to observe. You'd have oodles of nominal data and very little about the edges, which is why observation alone leads to wrong conclusions. Intervention is about making the black swans appear in a more timely manner. You say "okay, it works with this case, but what happens when I change this one variable?"

Every scientist is intimately familiar with this because when you begin to experiment, you find that you need to start observing different things than what you initially expected. We can draw parallels to coding, where you have likely experienced that you cannot formulate what the actual code will be until you start to get your hands dirty. Sometimes it is similar to what you expected and you only need minor changes, but many times you discover you had naive assumptions and things need to change drastically. It is easy to believe only minor changes happen because the drastic changes are composed of many minor ones. If you don't believe me, start documenting or go look at commit histories.

I'll give an example of a real life event: infrared radiation! Herschel had the belief that different colors were associated with different temperatures (which is true), and the experiment he devised used one thermometer to measure the colors and two as controls. The story goes that he decided to place one thermometer just beyond red and discovered that it read the highest of them all, but the truth is he probably just put one of his controls there because it was a natural place for them, saw a higher temperature, and said "what the fuck?" (it doesn't really matter which is true). This, accident or intentional, was intervention. *It was experimentation that led to observation.* This led him to devise more experiments to determine what was actually going on. You would never get here from observation because you can't observe infrared without specialized equipment (e.g. thermometers). You also won't get the right conclusion if your observations are just that thermometers are warmer around dark objects (a known fact Herschel used!), nor by observing thermometers reading higher temperatures near glass (lenses explain this). What are you going to observe? It is invisible!

I should also mention that at least a decade earlier Pictet was experimenting with mirrors and was even able to demonstrate the transmission of heat without the transmission of light! Herschel was probably well aware of Pictet's work. But what Herschel's work did was narrow down what was actually happening in Pictet's experiment. Because he logically understood that the prism was separating the colors in light, and that it did so in a continuous manner, it doesn't seem unreasonable to think there could be "unseen light" beyond the red (which does fade in intensity to the eye). It may also be unsurprising that a year after Herschel's work, Ritter discovered ultraviolet light. A thermometer wasn't really sensitive enough to confirm it, but he could use silver chloride, since it was known that violet light caused a faster reaction. After learning of invisible light beyond red, why not search for invisible light beyond violet?

It may again be easy to frame these as "observation first," but that's not accurate; they are coupled. Most certainly Ritter went hunting, and hunting for something that had not been observed, because frankly, the observation was the proof. You'd never get there "from observation alone" (or rather, not until you have a huge population, enough random events, and enough time; and even then, many things probably would not happen within the lifetime of the sun).


> There was an understanding of natural selection even back to antiquity. How could there not be? Did people not tame the animals and plants? These are experiments, and they saw the results.

No, people did not know natural selection before Darwin. He spent decades collecting and then analyzing data from the Galápagos Islands before he made his breakthrough.

It's pure hindsight bias to think that you can go from "I bred the fattest chickens together, who made a fatter chicken" to "Humans evolved from apes who evolved from single-cellular organisms". For millennia, people from all cultures believed that God created humans from the void. In the absence of data, that's as good a guess as you can have. If Darwin concocted his theory of natural selection before he had his data, no one would have believed him. By dismissing the theory of natural selection as something that was "obvious" pre-Darwin you are dismissing his life's work.


  > No, people did not know natural selection before Darwin
https://en.wikipedia.org/wiki/Natural_selection

Read the history. The topics discussed are not reminiscent of evolution because of hindsight; they are similar because they are similar. Darwin himself references many of them. People knew of artificial selection because they practiced it. You don't breed animals and plants without some basic knowledge here.

I'm not saying Darwin's work wasn't important. It's critical. But this does not warrant dishonoring and forgetting all those who did work that led to that point. Their work wasn't as ground breaking and influential, but they still helped set the stage. Darwin's work didn't appear out of a vacuum.

Science doesn't happen in leaps and bounds.

  > By dismissing the theory of natural selection as something that was "obvious" pre-Darwin you are dismissing his life's work.
This is a grave misinterpretation of what I mean. You have made mountains out of the pebbles I described. It can be true both that scatterings of the ideas existed, with only circumstantial evidence for them, and that monumental work was accomplished to legitimize those ideas and fill in so many holes.

Your point about what the average person believed is also irrelevant. Even now, many reject evolution, and many more did just a decade or two ago. It was a national debate not even a century ago.


By forcing formal study of the mind into the constrained methods used for studying the physical world, we leave the government and profit/power seekers as the only actors free to use the methods that work best.

I wonder if this is purely a coincidence.


What methods are appropriate but forbidden by this “forcing” (and what's causing the forcing, exactly)?

Methods that do not adhere to the scientific method.

The forcing is performed by culture. Think back to COVID, the effect was on full display in various forms during that spectacle. Or, pick most any war that gets substantial coverage in the media.

I'm unfairly picking on science here a bit, the problem is ideologies in general.


As in… what methods?

Psychological studies have to adhere to various controls and what not. "The method".

Put some smart people in charge of an ambitious, psychedelic-powered initiative... but they have no controls. Not science? Cannot be science?

It's somewhat like how hip hop influences and accelerates white cultural evolution. Do this idea, see what happens when I do it.


Can be science! Doesn’t have to be science! Science folks do exploratory studies all the time to get a sense of whether the tree is worth barking up before they do the hard work of proving it.

Psychedelic studies might not suit themselves to FDA gold standard double-blind clinical trial science, but science overall involves a really broad toolkit. Psychedelics aren’t the only kind of intervention that can’t be safely or effectively blinded, and there are methods that attempt to demonstrate an effect anyway.

It seems to be thanks to people trying stuff as well as rigorous scientific studies that psychedelics are having their present-day moment in the sun (thanks, MAPS et al!). (My favorite part was that time Alexandria Ocasio-Cortez cosponsored a psychedelics bill with arch-conservative, retired Navy SEAL Dan Crenshaw.)

Things can be valuable without science proving it—nobody’s coming after the local preacher or imam or rabbi or shaman demanding that they scientifically prove their ministerial insights. And lord help the busybody who comes at a retired Navy SEAL trying psychedelic therapy for their post-traumatic stress…

I wonder, though—does it seem to you that psychedelics should be industrialized? There’s a fine line between “legal” and “aggressively marketed by drug companies.”


I think they should be studied using the full spectrum of approaches and people available to us, not restricted (officially/rhetorically) to an ideologically constrained and confused discipline like science as it is practiced (which is not the same as what its scriptures teach).

Anyone is free to learn therapy skills and use them in their own lives, you're just not allowed to call it therapy.


