Workarounds to Computer Access in Healthcare: Password or a Dead Patient? (2015) (dartmouth.edu)
66 points by todsacerdoti on Nov 25, 2023 | 58 comments



Have had some level of security farce, like the ones described in this paper, in every org I've ever been in.

The root cause is failing to hold security staff to a dual mandate: keeping the bad guys out AND keeping the good guys in.

Because they don't get held to account for obstructing everyone else's work, they naturally do every ass-covering piece of security theatre they can - they don't suffer any consequences.

The moment they're under carrot-and-stick for not interfering with productivity, as well as preventing intrusions, you get more acceptable outcomes.


Only tangentially related, but my wife used to work at a private institute research lab which was affiliated with a large university, and therefore used its computing resources.

The university IT security staff did a great job enabling 2FA across the entire network... big job, well done. But they didn't do the slightest bit of requirements analysis for the people using the network, such as affiliated research labs. One day my wife went to use the computer _in the PC3 lab_ and couldn't without 2FA. Even logistically getting a computer in is a feat: that computer can then never leave the lab. PC3 labs don't allow you to bring in a phone or anything, really... you're double-gowned, double-gloved, goggles, etc. Hilarious.

It took them about a week of explaining themselves before IT security fixed it.


Great case in point. In a well-run org, that would've been elevated to a "security incident" just the same as a hack, with their jobs potentially on the line, rather than a week of pleading with them to have mercy.


A company may very well consider this a “security incident” or other high-priority issue, but at least in my experience, the method of reporting something like this can be vague. While the right answer should be a place to submit a ticket, even internal systems nowadays tend not to let users set the priority on tickets, and instead let L1s, who may not have the experience to recognize that something needs to be elevated, try to resolve everything. At that point, it seems the only good option is to run the issue up the chain and back down through the right department (which is probably more effective, at the end of the day).


Yeah, she was on the L1 helpdesk ticket merry-go-round for a couple of days before I told her what she needed to say to get a security team member to look at it.


> with their jobs potentially on the line

That would be a bit much and not accomplish improvements.


> PC3

Physical Containment Level 3 = Biosafety Level 3. I think.

https://en.wikipedia.org/wiki/Biosafety_level#Biosafety_leve...


There’s actually published evidence of exactly that!

The paper "Where Do Security Policies Come From?" found that the websites with the most aggressive policies were those with the least incentive to make the system easy to use. Big sites like Facebook and PayPal somehow manage to be safe without the strict password requirements of <random university intranet here>, likely because they have a financial incentive to make their systems easy to use.

https://www.microsoft.com/en-us/research/wp-content/uploads/...

(Disclaimer: I know one of the authors)


> keeping the bad guys out AND keeping the good guys in.

YES

> Because they don't get held to account for obstructing everyone else's work, they naturally do every ass-covering piece of security theatre they can - they don't suffer any consequences.

Every time you push back on some ridiculous policy, they'll respond with an ironic smirk or a shrug (are you seriously forcing a monthly password change and keeping hashes of the last 10 passwords?!). Nowadays basically every corporate security policy forces employees to install some kind of crappy DUO/whatever (FUCK YOU CISCO) app on a private phone; a work phone is a rare occurrence nowadays ("everyone has their own smartphone anyway"). It's never TOTP, which somehow works perfectly fine on many other crucial services. One might think we're all competent tech folks, but they're vicious malicious cunts.
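
For reference, TOTP is just RFC 6238; it's simple enough that a minimal sketch fits in a dozen lines of Python (the base32 secret below is a made-up example):

  # Minimal TOTP (RFC 6238) sketch; the secret is a made-up example.
  import base64, hmac, hashlib, struct, time

  def totp(secret_b32, period=30, digits=6):
      key = base64.b32decode(secret_b32, casefold=True)
      counter = struct.pack(">Q", int(time.time()) // period)
      mac = hmac.new(key, counter, hashlib.sha1).digest()
      offset = mac[-1] & 0x0f                               # dynamic truncation
      code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7fffffff
      return str(code % 10**digits).zfill(digits)

  print(totp("JBSWY3DPEHPK3PXP"))  # matches any standard authenticator app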


At the exec level, the Chief Information Officer (CIO) is primarily responsible for maintaining availability of the network, systems, and data, while the Chief Information Security Officer (CISO) is responsible for securing systems and data. Sometimes they are same-level positions and compete for authority + resources. Sometimes the CISO is subordinate to the CIO. The CIO supports profit centers, whereas the CISO is entirely a cost center, focused on preventing losses.

The problem is that the competing interests aren’t exactly comparable apples-to-apples. Should every system get OS patches ASAP? Yes, if you want to minimize the security surface area. No, if the upgrade creates some adverse effect on employees or systems. Testing can reveal the latter, but it takes time, which is the enemy of unpatched systems.

Low level IT workers are given instructions and incentives to follow established security procedures (including overseeing patching). Their supervisors are the ones who must make the call when it comes to how much testing is sufficient before forcing those patches.


This is a failure of the business, not security per se. A proper impact analysis of changes should be conducted along with risk analysis to determine if the change is even warranted.

Security themselves aren't usually qualified to know how things will affect everyone and all processes, any more than anyone else is in the business for things outside their domain.

If you allow one bit of the business to run rampant and put their needs ahead of all others, you will get bad outcomes, whether that department is security, or development or IT infrastructure, or marketing, or whoever.

So I disagree with your carrot and stick approach. It's not about holding people who may have a good reason to want something accountable for all the other possible negative effects of it. It requires a business that is properly managing itself and balancing all the differing needs within it.


"The moment they're under carrot-and-stick for not interfering with productivity, as well as preventing intrusions, you get more acceptable outcomes."

Suggestions on how to do this?


Give the COO/CTO the last say, not the CISO.

Usually it's the opposite, due to the asymmetry of a terrible unknown and claimed unquestionable remedies vs "it'll certainly slow us down to some degree". The person making the latter argument needs to have far more knowledge in areas outside their domain, and far better debate skills, to prevail. The person arguing the former need only say "hackers" in a solemn tone, to win by default.

Specific security measures should be added at "absolutely needed, beyond reasonable doubt" threshold, not "probably helps", as adjudicated by the leaders of the productive core of the business.

That'd go a long way toward stopping the nonsense password-rolling witch-doctoring.


> Give the COO/CTO the last say, not the CISO.

The problem with this is that when the COO/CTO doesn't understand or respect the CISO or their job, you end up with easily-preventable security breaches.

There need to be places where security is paramount.

There need to be places where user access/experience is paramount.

There are no hard and fast rules as to where these are. Human judgement is required, however messy that ends up being.


We did that for years and got more breaches and vulnerabilities…


Ah yes, the classic problems:

* logging in/out takes an unreasonable amount of time

* password rotation despite that being demonstrably worse for security across every metric

Followed up with some domain specifics:

* Failing to acknowledge that a single system or device may be shared rapidly among many different users

* Delaying access due to excessive authentication steps can kill people

* Not understanding that a "quick action" is not quick if it requires another action that is not quick (surgery prep is not a 5-second task, people)

The first set are inexcusable in any environment, especially password rotation; the latter just requires the people coming up with security policies to actually spend time in the environment their policies will apply to.


I feel like the problem with security is the same as with HR. They’re so specialized they often have no other work, so spending a bunch of time on pointless policies just makes their job more secure for them.


I'm not in healthcare, but I've evaded annoying/inconvenient security crap.

One method I used for evading VPNs and using SSH was to have a script running inside the protected network, making an active connection to an SSH server on my machine at home and setting up a tunnel. Then I could use remote desktop through the tunnel.
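
Something like this minimal Python sketch, for illustration (the host name and ports are hypothetical, and key-based auth is assumed to already be set up):

  # Keep a reverse SSH tunnel open from inside the protected network to a
  # home machine; 3389 is RDP, "home.example.com" is a placeholder.
  import subprocess, time

  while True:
      # -N: no remote command; -R: expose this machine's RDP port on the
      # home box, so a remote desktop client there can hit localhost:3389.
      subprocess.run(["ssh", "-N", "-R", "3389:localhost:3389",
                      "user@home.example.com"])
      time.sleep(30)  # the loop re-establishes the tunnel if it drops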

I use the following TXR Lisp program to defeat screen timeouts on Windows:

  ;; FFI type aliases for the Win32 calls below.
  (typedef UINT uint)
  (typedef LPCWSTR wstr)
  (typedef HWND (cptr HWND))

  (defsymacro NULL cptr-null)
  (defsymacro MB_OK #x00000000)
  (defsymacro MB_ICONWARNING #x00000030)

  (typedef EXECUTION_STATE uint)

  (defsymacro ES_CONTINUOUS #x80000000)
  (defsymacro ES_DISPLAY_REQUIRED #x00000002)

  (with-dyn-lib "user32.dll"
    (deffi MessageBox "MessageBoxW"
           int (HWND LPCWSTR LPCWSTR UINT)))

  (with-dyn-lib "kernel32.dll"
    (deffi SetThrExecState "SetThreadExecutionState"
           EXECUTION_STATE (EXECUTION_STATE)))

  ;; Keep the display required (suppressing the lock timeout) until the
  ;; user dismisses the message box, then restore the previous state.
  (with-resources ((prevstate (SetThrExecState (logior ES_CONTINUOUS
                                                       ES_DISPLAY_REQUIRED))
                              (SetThrExecState prevstate)))
    (MessageBox NULL "Screen lock disabled\r\nuntil you click OK." "NoLock"
         (logior MB_OK MB_ICONWARNING)))

I wrote a Firefox extension called JP-Hash, which gives you a hotkey that converts text in an input field into a hash made up of romanized Japanese syllables (and thus usually easy to type, if you have to, and possibly easy to pronounce and memorize). It includes an upper-case letter, a digit, and a symbol.

https://addons.mozilla.org/en-US/firefox/addon/jp-hash/

https://www.kylheku.com/cgit/jp-hash/about/
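
The general idea, as a minimal Python sketch (illustrative only; this is not the extension's actual code, and the syllable table is abridged):

  # Derive pronounceable syllables from a digest of the input, then append
  # an upper-case letter, a digit, and a symbol. Not JP-Hash's real code.
  import hashlib

  SYLLABLES = ["ka", "ki", "ku", "ke", "ko", "sa", "shi", "su", "se", "so",
               "ta", "chi", "tsu", "te", "to", "na", "ni", "nu", "ne", "no",
               "ma", "mi", "mu", "me", "mo", "ra", "ri", "ru", "re", "ro"]

  def syllable_hash(text, n=6):
      digest = hashlib.sha256(text.encode("utf-8")).digest()
      body = "".join(SYLLABLES[b % len(SYLLABLES)] for b in digest[:n])
      upper = chr(ord("A") + digest[n] % 26)       # deterministic extras
      digit = chr(ord("0") + digest[n + 1] % 10)
      symbol = "!@#$%&*+"[digest[n + 2] % 8]
      return body + upper + digit + symbol

  print(syllable_hash("correct horse"))  # e.g. "rokichimasore..." style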


You are likely compromising your employer with this.

This is basically a reverse-connection trojan, and those have been around since the 2000s.

It's weird you haven't been caught yet.


Yep. I found one of my developers doing this once. I had a quiet word with him and told him to stop it, and that we would be actively monitoring. He got the message.

He could just as easily have been fired, or possibly even prosecuted, if someone else had caught him doing it. Maybe he should have been. He was a very good developer though!


I am not using that now. It was in the past.


This is pretty cool! I’ll have to try it out


URL changed from https://cohost.org/mononcqc/post/3647311-paper-you-want-my-p, which points to this.


It really seems to me that, at least for healthcare, the solution is just to go back to paper, then take the paper and do data entry on it after it's out of the workflow, on a daily or twice-daily basis or something.

Honestly I'm also going to go out on a limb and guess that if most of the EHR portal screens were replaced with flat org files, things would also work better.


The "paper security model" leaves quite a bit to be desired and has a lot of efficiency issues itself, but parts of it carried into an EMR wouldn't be terrible.

You might have a terminal that allows anyone to use it if an employee credential/badge is within a certain proximity of the device. You might set up terminals in access-controlled areas that don't even need this measure of security and are always logged in and available.

Automatic logoff might only happen if an actual security alert is issued for that floor or section. Then you might have a manager who can reopen specific terminals or wait for the security situation to end and have the terminals automatically reopen.

Then all you might need is for the terminal to do constant "auditing" of user actions. It could take photographs and do facial recognition of hospital staff when a patient record or action is requested; it could alert internal IT when terminals or staff on one floor access patient information outside of their purview; it could record credential information from RFID badges near the terminal; and it could store all this information in a log attached to the record and available to the internal audit team.
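
A sketch of what that kind of policy might look like (every name here is invented; it's just to make the shape of the idea concrete):

  # Hypothetical unlock/audit policy for a ward terminal.
  from dataclasses import dataclass
  import time

  @dataclass
  class Badge:
      user_id: str
      role: str

  def terminal_unlocked(in_access_controlled_area, nearby_badges,
                        security_alert_active):
      if security_alert_active:
          return False               # lock only during an actual alert
      if in_access_controlled_area:
          return True                # the locked ward already gates entry
      return len(nearby_badges) > 0  # otherwise require a badge in range

  def audit(nearby_badges, action):
      # Record who was near the terminal for each access, instead of
      # blocking the access itself; review happens after the fact.
      print(time.time(), [b.user_id for b in nearby_badges], action)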


It’s complicated: the article mentions problems caused by this, but not the problems it helps with.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6611086/ Looks to cut medication errors by 50%.


I thought the whole promise of electronic records was to avoid transcription errors -- if the doctor enters it into the record himself, there's less chance of someone misreading or mistyping his notes later.


That may be the medical purpose of electronic records, but there's a business purpose -- billing -- and an administrative purpose that includes security and regulatory compliance. All orchestrated by the same software.

If I were given one guess, it would be that the medical purpose for authentication is to avoid accidentally corrupting the data, e.g., typing Alice's prescription into Bob's screen. But the authentication technology is distorted by a security purpose that may be unnecessary and is being regularly bypassed.

For instance, just having people enter their initials into every entry or screen would go a long way towards eliminating erroneous entry, without being too burdensome.


The problem with what you describe is alert fatigue.

This IS the selling point, therefore we can catch all errors!

WARNING: patient medication dose higher than expected (override)

WARNING: patient already received dose within recommendations (override)

WARNING: patient is allergic to medication (override, mistake)


I don’t have the source on hand, but during my clinical informatics rotation it was mentioned that many early EMR implementations actually increased drug errors.


Seeing my clinician fight the EMR to figure out what the stupid drug was called certainly didn’t help my impression of EMR systems anyway.


I wonder if that was a true rate change or a change in detection.


It was an increased rate of adverse effects due to drug mistakes, which are monitored and easily detected, so it's unlikely to be just an increase in detection.


Promise yes.

Reality on the ground is that there were governmental subsidies to implement them, so everyone got them.

How much they actually help vs. create problems is still an open research question.


I've spent a lot of time working as an academic in / out of hospitals. There's a big difference between an academic research centre (where you have, comparatively speaking, a lot of time to sit and think) and the "coal face" of clinical medicine (where the opposite is true).

Literally every high-level doctor I have ever worked with:

– Knows the passwords of their juniors and shares their password with them

– Has a lot of mutual bottom-covering going on

– Finds hospital computers intolerably slow, incredibly locked down, and designed by "muppets" who don't have to use them

– Keeps the issued smart-card for accessing patient records (required as part of a 2FA scheme) on their own lanyard, because the cards are all functionally identical

– Keeps their badge on its lanyard mostly around their neck, but has occasionally asked for mine / borrowed mine to open random doors

All of this comes from precisely the mindset of putting patient care above security theatre. They all share passwords all the time because, well, it's hard to type on a keyboard and order a drug or a kit if you're in the middle of doing a transoesophageal echo, say, yet it's incredibly apparent that it's needed quickly. We all get yelled at periodically never to do this. If I were the person with the cardiac complaint, I'd much rather get my drugs quickly. A lot of the examples in the article ring very, very true.

Most places I have worked let every doctor look up the medical records of every patient, but keep an audit of who does so (and occasionally make a fuss if someone does something inappropriate).

I did once have an awful conversation with hospital security, who refused me access to the door I was on the secure side of, when I was in scrubs and had a human liver on a perfusion rig (along with colleagues); it was after hours, my badge hadn't been given permission to open the door in the middle of the night, and we needed to get something on the other side of it. That was "fun". (We wedged the door open when they were gone.)


Imagine that your embarrassingly slow software literally costs people's health (and maybe even lives, in the long run). But we will still pretend everything is fine, because "the DEveLopEr eXPeRiEncE" and "jUsT buY beTTeR hArDWaRe". And sure, hospital-issued PCs aren't fast, but even if they're from past decades, they still have processors running billions of operations per second. And this kind of record-keeping software isn't demanding either. The fact that it takes anything longer than your IO-bound critical path is pathetic. It's just tragic how little we actually care today for end user experience, despite all the pretending.


> It's just tragic how little we actually care today for end user experience, despite all the pretending.

Most modern frameworks, languages, and concepts are developed and owned by ad companies. Then we take these and build critical stuff with them. Then they use that as leverage to expand into industries where their stuff is used. There are some UX folks here and there, but they're happier working on something "engaging" and "addicting".


This is not only in healthcare. This is everywhere. Although computers are much faster, we need to wait for them more.

And although I run 2D programs 99.99% of the time, developers optimize for 3D. Even though they can't manage to draw a 3D button, the whole screen is drawn on a 3D surface.


I have an orthopedist friend. She shared her credentials with one of her residents so he could write up patient charts. Apparently residents are subhuman doctors in the hospital she worked at, because they didn't get accounts to log into the system. The staff was used to sharing their own credentials with them so work could get done. It was a routine, everyday thing.

Then one day a patient died after surgery. The hospital got sued. They pulled up the charts as evidence. Whose name was in there? Hers. She had to legally prove beyond all doubt that she was physically not in that hospital at the time the event happened, and had to explain to a judge that sharing of credentials was routine in that hospital.


> None of this is really surprising to me; any inadequate system seems to have a tendency to create its own shadow workflow that hides problems by working around them.

This reminds me strongly of Seeing Like A State. The formal system only appears to work because it's supported by an informal system devised by actual users.

The system is created according to a "map" or abstraction of the real hospital. But the map is always lacking in detail, may be skewed in various ways, and results in an unrealistic system.


The original paper was posted here 7 years ago and also got no interest:

https://news.ycombinator.com/item?id=13332184


(well an hour after it was posted it was only at 3 votes and looked like it was going nowhere...)


All healthcare employees have badges they carry everywhere. Just get new ones with chips in them and an associated PIN. Add card scanner dongles for the computers. Access to rooms and supplies that don't need tight security is simply a card tap. Require the PIN as well for more important stuff (computer login). Require a biometric as well for really important stuff.

Make sure employees know they must report lost/stolen cards within 1 hour or they will be fired.
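
Roughly, the tiers would look like this (a sketch; the names and thresholds are illustrative):

  # Hypothetical tiered check: card tap for ordinary doors/supplies,
  # card + PIN for computer login, card + PIN + biometric for the
  # really important stuff.
  from enum import IntEnum

  class Tier(IntEnum):
      DOOR = 1       # card tap only
      LOGIN = 2      # card + PIN
      CRITICAL = 3   # card + PIN + biometric

  def authorized(tier, card_ok, pin_ok=False, bio_ok=False):
      if not card_ok:
          return False
      if tier >= Tier.LOGIN and not pin_ok:
          return False
      if tier >= Tier.CRITICAL and not bio_ok:
          return False
      return True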


You forgot the most important thing: make them work without failure. When an access system must communicate with the mothership and the mothership or network is down (hello Microsoft), you're doing it wrong.


Now you’ll have a bunch of scanners with the cards permanently on top, doors with a card hanging from the handle, and somehow all your staff is fired.

Mission accomplished?


When something goes wrong, they will collectively cover for each other. When something goes catastrophically wrong, the public will learn about it and say "how could it function like that?!". Alternatively, they will use the opportunity to get rid of someone unwanted. Healthcare professionals are awfully slimy and corrupt.


Just because I leave my card somewhere useful doesn't mean it's lost and just because Bob has my card doesn't mean it's stolen!


For people who really don't like passwords, use more biometrics?


Biometrics aren't always reliable; the article mentioned this anecdote:

Another example comes from a city hospital where creating a death certificate requires a doctor's digital thumbprint. Unfortunately for that hospital, there is a single doctor that has thumbs that the digital reader manages to scan, so the doctor ends up signing all the death certificates for that hospital regardless of whose patient the deceased was.


Smartcard + faceid?


> Smartcard + faceid?

You're missing the point.

The solution is for the tech people to stop thinking they're the center of the universe, and drop requirements that make no sense for their users.

In the case of the death certificate, that means dropping the extra authentication requirements for that computer application*, or just having the doctor print out the certificate and sign it, like it was always done.

* Software engineers suffer from a certain kind of brain rot where they assume the computer system itself is the last and only line of defense (so they enforce onerous authentication requirements). In some cases that might be the right call, but in others it makes more sense to loosen the requirements and rely on other systems (e.g. auditing and after-the-fact investigation).


Identity and access management is a component of my work at a fintech. The argument isn’t lost on me. A death certificate is a powerful government signal, so while friction should be minimal, so as not to be overly burdensome on the practitioner, it is not something you can be careless about from an identity perspective. It calls for strong provenance. If the technology is not transparent, that is a call to improve it until it is, not to ignore the requirements.

Doctors are also known for superiority complexes. It took forever just to get them to wash their hands to prevent contagion spread, historically speaking. They are not special snowflakes nor gods, but yet another stakeholder in a system. Their feedback should be used to improve systems, as I mention above, to assist them but not block them. Change is inevitable.

https://www.cmu.edu/dietrich/sds/docs/loewenstein/physicianN...

https://www.nationalgeographic.com/history/article/handwashi...

(not a software engineer; head of security, including anti-fraud measures)


The legal system can generally deal with anything that falls through the cracks.

There's probably not a whole lot of massively society-impacting fraud in the death certificate system if a whole hospital can run off of technically fraudulent records when one guy is signing them all.

You could cut down on that by relaxing the technological attempts to "improve" it all and just let the legal system and humans address any actual abuse.


In this case it’s less fraud and more the fact that it can take months or even years to undo a death certificate during which time one would experience undue hardship. Making it hard to sign is by design.


> In this case it’s less fraud and more the fact that it can take months or even years to undo a death certificate during which time one would experience undue hardship. Making it hard to sign is by design.

It should be noted that absolutely none of those goals were achieved by the above-mentioned poorly implemented biometric requirement. The only thing that achieved was inconvenience that needed to be worked around to get the real job done.


What if I misplaced my smart card? What if I'm wearing a mask or a surgical hat?


I’ve seen RFID bracelets used for cases where faces and fingerprints are inaccessible, similar to contactless payments. Silicone straps for comfort.


The user may be wearing gloves and a mask.



