It is known that the reliability of psychiatric diagnoses is non-existent https://en.wikipedia.org/wiki/Rosenhan_experiment

Edit: It is ok to downvote if you disagree. But think what it says about you if you are downvoting facts.

-----


That links to an experiment that -proves- suggests it's possible to game psychiatric diagnosis, not that mental illness doesn't exist.

edit: by the way, I've been here for like 5 years now and I still don't know what flavor of Markdown is used here.

-----


Linked from the FAQ at the bottom of the page: https://news.ycombinator.com/formatdoc

-----


Thanks. So it's sorta like "quarterdown".

-----


Yes. The experiment proves that we are at the astrology stage of development regarding mental illnesses. It doesn't mean that stars or planets do not exist. It just means that astronomy is yet to be born. Psychiatry awaits its Copernicus.

btw, I don't see how you could explain the second part of the experiment, where there were no fake patients at all, as "gaming". If the mere expectation of a fake patient screws up the results so badly, your methods are garbage. This and other counter-arguments are mentioned on the wiki page.

-----


> But think what it says about you if you are downvoting facts.

You posted something from 40 years ago.

-----


Do you think that if the same experiment were repeated today, the results would be different?

The link mentions related experiments with similar results as recent as 2008.

Anyway, judging by "The Diagnostic and Statistical Manual of Mental Disorders", Fifth Edition (DSM-5, 2013), there has been no paradigm shift in the last 40 years.

It seems DSM-5 is bad even by psychiatry's own standards. People use phrases like "false 'epidemics'" https://en.wikipedia.org/wiki/DSM-5#Criticism

New DSM-5 Ignores Biology of Mental Illness http://www.scientificamerican.com/article/new-dsm5-ignores-b...

-----


> Do you think if the same experiment were to be repeated today; the results would be different?

Yes. Big institutions have closed. Smaller hospitals replaced them and most people get treated in the community.

For example: my county has a population of about 600,000 people. About 4,500 people are on the books of the local MH trust at any time. There are only about 150 inpatient beds at any time.

People only go into hospital if they are a danger to themselves or others - hearing a voice that says "empty" or "hollow" or "thud" is not something anyone would be hospitalised for.

To get a diagnosis of schizophrenia you have to match the symptoms over a six-month period and be assessed by the same doctor during that time. One of the benefits of the DSM / ICD is that doctors now use a set of standards when diagnosing mental illness.

The paper suggests that this is only a problem with psychiatry. This is something you hear from a lot of people. "No test exists to diagnose a psychiatric illness, thus psychiatry is a sham. Look, I can get diagnosed if I lie to doctors!" This ignores the fact that there are a bunch of physical-health diseases you can get diagnosed with if you lie to doctors, but we don't call those a sham. Your link even talks about that:

> Many defended psychiatry, arguing that as psychiatric diagnosis relies largely on the patient's report of their experiences, faking their presence no more demonstrates problems with psychiatric diagnosis than lying about other medical symptoms. In this vein, psychiatrist Robert Spitzer quoted Kety in a 1975 criticism of Rosenhan's study:[5]

> If I were to drink a quart of blood and, concealing what I had done, come to the emergency room of any hospital vomiting blood, the behavior of the staff would be quite predictable. If they labeled and treated me as having a bleeding peptic ulcer, I doubt that I could argue convincingly that medical science does not know how to diagnose that condition.

Your link mentions RD Laing in the lead. RD Laing is now thoroughly discredited (although he did have some interesting ideas). You should read some of his books.

> The link mentions related experiments with similar results as recent as 2008

No it does not.

There's a probable hoax from someone who can't provide any evidence, from 2004; and a totally different "experiment" (tv show) from 2008 where doctors were not allowed to interact with patients. You'd get similar results if the patients had physical health ailments.

-----


How does the closing of big institutions improve the scientific rigor of a psychiatric diagnosis?

The misunderstanding is probably my fault -- the message was too short. Let's establish the basics: there can't be any science if you can't measure. Reproducibility (replication), theory, and experiment (the prediction <-> measurement spiral) are cornerstones that support each other and build on the language of math.

"the results" -- as the very first message I've posted on the topic says -- are about the reliability of psychiatric diagnoses. If we can't have that (an ability to measure) there is nothing to discuss.

On "150 inpatient beds" -- how many people are in jail instead? And again, how does it relate to scientific foundations of psychiatry?

Both "lye to the doctor" excuse and "bleeding peptic ulcer" are addressed on the very same wiki-page where the quotes come from.

"Your link mentions RD Laing in the lead. RD Laing is now thoroughly discredited." Here's the only occurrence of the name in the article:

"while listening to one of R. D. Laing's lectures that Rosenhan wondered if there was a way in which the reliability of psychiatric diagnoses could be tested experimentally."

Does "the lead" mean "an inspiration"? In what way precisely anything about RD Laing changes the results of Rosenhan experiment?

I can go on pointing out imprecise statements but I don't see the point. What would be nice to see is even a single reference to an experiment that shows that yes, we can reliably diagnose psychiatric illnesses, and here's what changed in our assumptions -- where is the paradigm shift?

On the 2008 experiment, the result was: "The experts correctly diagnosed two of the ten patients, misdiagnosed one patient, and incorrectly identified two healthy patients as having mental health problems." If you think that the result:

  psychiatric diagnoses are unreliable
is invalid due to the way the experiment was conducted (some methodological issues) then do point them out.

> doctors were not allowed to interact with patients.

Diagnosing a mental illness is a serious business with long-term consequences. Did any of the "experts" refuse to diagnose on account of insufficient information? -- I don't know.

I'd like to be proven wrong and see psychiatry in the hard science camp.

Have I mentioned that psychiatric diagnoses are unreliable? ;)

-----


The facts you present don't support an answer to the above question.

-----


"the install-time keys shipped on your system" -- given Rust's rapid pace of development, I highly doubt that default packages that are provided e.g., by Debian would be relevant.

To get a reasonably fresh version, you would need to use something like an Ubuntu PPA, where you need to trust the PPA's author and TLS to get signed packages and the corresponding keys.

"keys + signed package" via TLS from a known (via google) site is more secure than "human readable sh script" via TLS but it is not by leaps and bounds.

-----


related discussion: https://news.ycombinator.com/item?id=8553625

-----


I doubt that there are no existing libraries in Python for working with discrete probability distributions. Not that it stops anyone from implementing their own.
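
For example, a minimal sketch with SciPy (assuming it is installed; the outcomes and probabilities are made up):

  from scipy import stats

  # a toy discrete distribution over the outcomes 1, 2, 3
  xk = [1, 2, 3]
  pk = [0.2, 0.5, 0.3]                 # probabilities must sum to 1
  dist = stats.rv_discrete(name="toy", values=(xk, pk))

  print(dist.pmf(2))                   # 0.5
  print(dist.mean())                   # expected value: 2.1
  print(dist.rvs(size=5))              # five random samples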

-----


`curl | sh` is protected by an SSL certificate issued by a CA that my system trusts.

"pgp-signed installer file" is protected by a key from a stranger.

Both are insecure.

Could you enumerate several points that show "pgp-signed installer file is __a lot better__ than curl | sh"?

-----


Say we're trying to download the PHP source code from php.net. If all that's protecting us is SSL, then if an adversary compromises the php.net servers (happens all the time, and actually has happened to PHP), they can immediately replace all the downloads with backdoored copies. Whereas with a PGP signature, the key can be stored off-line (even on an air-gapped system), so that even if the web server is compromised, the adversary can't make me believe the file is legit.
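
For illustration, a minimal sketch of that check (the filenames are hypothetical; it shells out to a locally installed gpg whose keyring already contains the developer's public key):

  import subprocess

  def verify_download(path, sig_path):
      """Check a detached PGP signature made by a key already in the local keyring."""
      result = subprocess.run(["gpg", "--verify", sig_path, path])
      return result.returncode == 0

  # the web server (or a mirror) can serve the bytes, but it cannot forge
  # a signature from the developer's offline key
  if not verify_download("package.tar.gz", "package.tar.gz.asc"):
      raise SystemExit("signature check failed -- do not install")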

PGP can also be used in a trust-on-first-use manner. Get the public key once over an insecure channel, and if the attacker missed that single opportunity, you're safe until the key changes. With SSL, on the other hand, you're at risk every single time you make a connection, because any of hundreds of CAs has the power to sign that certificate, and as above, you have to assume the web server isn't compromised.
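
A crude sketch of the trust-on-first-use idea (the URL handling and pin file are hypothetical):

  import hashlib, json, os
  import urllib.request

  PIN_FILE = "pinned_key.json"              # hypothetical local pin store

  def fetch_and_check_key(url):
      """Trust-on-first-use: remember the key's hash the first time, compare afterwards."""
      key = urllib.request.urlopen(url).read()
      fingerprint = hashlib.sha256(key).hexdigest()
      pinned = json.load(open(PIN_FILE)) if os.path.exists(PIN_FILE) else {}
      if url not in pinned:
          pinned[url] = fingerprint         # first (insecure) fetch: pin it
          json.dump(pinned, open(PIN_FILE, "w"))
      elif pinned[url] != fingerprint:
          raise RuntimeError("key changed since first use -- possible tampering")
      return key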

Another reason PGP is important is mirroring. Big F/OSS projects let others run volunteer mirrors. Even if those mirrors support SSL and the transport from the author to the mirror is encrypted, there's absolutely no guarantee that the mirrors themselves are not malicious. The mirrors could be backdooring their own files. The fact that you have an SSL connection to the mirror doesn't do anything to prevent this. But with PGP signatures, you're assured the files come from the software's developer and haven't been tampered with by the mirror.

So the difference is: SSL secures the connection between your browser and the web server. PGP ensures you're getting the file the software developer intended you to get. It's a semantic difference.

I'd also argue hard against `curl | sh` for (assuming it exists) the psychological effect of teaching users that it's OK to pipe random things from the web into sh.

-----


If an adversary compromises the php.net server then naturally the files will be signed by the adversary's keys. In both cases, all you need is the ability to replace files (the package files, signature files, HTML instructions), nothing else. I don't see how gpg is __a lot more__ secure here.

You need some other channel to communicate what keys should sign what files. You need some other channel to import the keys. Catch 22.

"trust-on-first-use" do you mean something like the certificate pinning?

Let's consider https://www.torproject.org It uses GPG signatures for its packages.

If SSL is compromised as you say, then all I need to fool you is to give you files that are signed using my keys (unless you know that you should use the 0x416F061063FEE659 key (a magic secure channel), you've already imported it (again, the magic channel), and Tor never changes the key).

Where should I go to check that 0x416F061063FEE659 and pool.sks-keyservers.net are the correct values (Google?) -- and that is assuming the key itself, the key server, and the connection to it are not compromised?

Ask yourself: when was the last time you tried to check that the instructions showing the key fingerprint and the key server to use are genuine?

Also, if you have paper walls I wouldn't try too hard to make the door impenetrable. It is a trade-off: if you are downloading code from a stranger's repository then you won't gain much by replacing `curl https://github.com/... | sh` with a gpg-signed (by the same stranger) download.

Security is like onion rings: there are layers but it is only as strong as its weakest link. We know that a real adversary will just hack your machine if necessary.

-----


> No it's not because MSI packages and EXEs are signed

Is it more difficult to provide your own fake EXE installer than to man-in-the-middle the HTTPS that the curl examples use?

-----


The question is: is it easier to place your code on that curl site? If there is some web-layer vulnerability, you can put your own code there.

With signed MSIs and EXEs, you'd need to get your code signed, which is probably more difficult than exploiting the web layer.
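
For instance, a signed installer can be checked before running it (a sketch; assumes the Windows SDK's signtool is on PATH, and the filename is hypothetical):

  import subprocess

  # verify the Authenticode signature chain before executing the installer
  ok = subprocess.run(["signtool", "verify", "/pa", "installer.exe"]).returncode == 0
  if not ok:
      raise SystemExit("installer signature is invalid or untrusted")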

-----


50% survive.

-----


The number I've seen is 30%, and I haven't seen anything written on the quality of life afterwards...

But either way, a 50% survival rate under circumstances where I am more likely to infect those I love the longer I'm alive makes it a much easier choice. At an R0 of 2 I infect two others on average, so at ~50% mortality I'm statistically likely to take one other life, who will go on to take one other life, etc. At an R0 greater than 2, I'm going to be responsible for the deaths of many more people. Only at an R0 of less than 2 is it even reasonable to consider taking your chances to stay alive.

-----


50% is for the current outbreak in West Africa.

Total Cases: 9937; Total Deaths: 4877 (≈ 49% fatality so far)

http://www.cdc.gov/vhf/ebola/outbreaks/2014-west-africa/inde...

-----


Vikings founded Kievan Rus', according to some sources. Ukraine, Belarus, and Russia are the modern countries that trace their origins to it.

-----


David Beazley, after analyzing 1.5 TB of C++ code, shows in "Some Lessons Learned": C++ -- SUCKS, Assembly code -- ROCKS http://www.youtube.com/watch?v=RZ4Sn-Y7AP8#t=2049

-----


> Choosing UTF-8 aims to treat formatting text for communication with the user as "just a display issue". It's a low impact design that will "just work" for a lot of software, but it comes at a price:

> - because encoding consistency checks are mostly avoided, data in different encodings may be freely concatenated and passed on to other applications. Such data is typically not usable by the receiving application.

> - for interfaces without encoding information available, it is often necessary to assume an appropriate encoding in order to display information to the user, or to transform it to a different encoding for communication with another system that may not share the local system's encoding assumptions. These assumptions may not be correct, but won't necessarily cause an error - the data may just be silently misinterpreted as something other than what was originally intended.

> - because data is generally decoded far from where it was introduced, it can be difficult to discover the origin of encoding errors.

It seems the surrogateescape error handler introduces these issues right back (only for Unicode strings instead of bytes this time), i.e., a+b is no longer well-defined (a and b may be Unicode strings containing lone surrogates for different reasons). JSON permits lone surrogates in its strings, so such data can easily spread over the network too.
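
A small sketch of that failure mode (the byte string is made up):

  data = b"caf\xe9"                            # latin-1 bytes, not valid UTF-8
  s = data.decode("utf-8", "surrogateescape")  # 'caf\udce9' -- contains a lone surrogate
  t = "abc" + s                                # concatenation happens without any error

  t.encode("utf-8", "surrogateescape")         # ok: round-trips the original bytes
  t.encode("utf-8")                            # UnicodeEncodeError, far from where the bytes came in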

> - as a variable width encoding, it is more difficult to develop efficient string manipulation algorithms for UTF-8. Algorithms originally designed for fixed width encodings will no longer work.

> - as a specific instance of the previous point, it isn't possible to split UTF-8 encoded text at arbitrary locations. Care needs to be taken to ensure splits only occur at code point boundaries.

It seems like premature optimization to claim that UTF-8 being a variable-width encoding is a performance bottleneck in most applications.

And if we want to show the data to a user then we should handle user-perceived characters (the \X regex) that may span several Unicode codepoints, e.g., to avoid splitting a Unicode string in the middle of a character.
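
For example, with the third-party regex module (a sketch; the sample string is arbitrary):

  import regex                            # third-party; the stdlib re module has no \X

  s = "cafe\u0301"                        # 'café' with a combining acute accent
  print(len(s))                           # 5 code points
  print(regex.findall(r"\X", s))          # ['c', 'a', 'f', 'é'] -- 4 user-perceived characters
  print(s[:4])                            # 'cafe' -- a naive slice cuts the accent off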

---

Unicode by default is more complex than the article makes it appear to be e.g., see 🎅 𝕹 𝖔 𝕸 𝖆 𝖌 𝖎 𝖈 𝕭 𝖚 𝖑 𝖑 𝖊 𝖙 🎅 (it is for Perl but Unicode issues are mostly universal) https://stackoverflow.com/a/6163129

-----
