Closed, proprietary image formats and systems HURT patients. We used the local hospital for chemo and did everything else at the Children's Hospital 1.5 hours away for his legs and lungs. I would always have to wait 20-30 minutes to get a DVD of the studies (PET, CT scan or MRI, even ultrasound, but those are worthless) and then bring them to the doctor. The doctor would be forced to use whatever portable image-viewing program came on the DVD, and then the images had to be sent to the IT department to be imported into their system.
We would be there to remove some horrible tumor, but before half his surgeries (I can't count how many surgeries he had) we would have to go in the day before (3-hour round trip) to get the expensive scan done again. One time I had a scan from 11 PM to midnight, then drove home around 2 AM, and was back at the hospital at 7 AM to check in for a 10-hour surgery. ALL BECAUSE THE FORMATS ARE CLOSED and SYSTEMS could not connect so that my son's records were the same everywhere. I carried 20 DVDs with me all the time, just in case.
In case you are wondering, my son unfortunately passed away after almost 5 years of fighting. If you are ever interested in giving to a cancer charity, please consider stbaldricks.org. Most charities give 0% or 2% to pediatric research, and that is why we went over 20 years without a new chemo for children until last year; St. Baldrick's funded the research for that amazing new drug, which fights a different type of cancer than my son had.
Somebody I know who works in procurement for a big hospital explained to me how the manufacturers of equipment play with this (in software and hardware) in order to capture the market.
It's negligence by our legislators to allow this kind of thing (I hope it's negligence and not greed).
Sorry for your loss. I can't even imagine how this kind of experience feels.
I work as a developer at a large-ish (>20,000 employees) healthcare system. Our CIO carries around a thick 2-inch binder from his infant son's medical treatments back in the mid-90's (his son died at around a year of age). The binder is meticulously organized and includes sections for various departments, complementary organizations (long-term care hospital, specialists, etc.), etc. It was their only way of coordinating care at that time. That experience led to the push in our system for electronic records back in the early 2000's, and we're continuing to pursue standardization and interoperability to make records accessible across departmental and organizational boundaries.
To every healthcare organization—for-profit or non-profit—that sees interop as a threat to your ability to sell licenses and fill beds: fuck you. Your customers' lives are much more important than your supposed competitive advantages—and, in the long term, the companies that share data will be the winners.
Why? Because the standards are there!
The ACR/NEMA standard dates back to 1985(!) and is nowadays known as DICOM. Here in Germany/Europe, the medical imaging systems as well as the archive systems (PACS) are designed to follow that standard. DICOM covers everything, from the exact flavour of lossless(!) image compression, to the metadata structure with more fields than you will ever need to describe the patient, up to the archiving of data.
There are even Free Software / Open Source clients which do a great job in visualizing those datasets (OsiriX is a well known example).
I thought this was an international standard. How can any hospital afford to buy crap medical devices that don't care about such an important 30-year-old standard?
Did I get something wrong about DICOM?
Even with DICOM, not all encodings (either low-level, like value representations, or high-level like transfer syntaxes and image compression algorithms) are supported by all viewers/PACS.
Finally, every system assigns patients its own local ID and does its own local numbering/coding for procedure IDs and so on. This adds delays and work when loading foreign images into a PACS.
I won't get into the bugs. I work in the field and I've seen some low-level DICOM handling functions riddled with quirks, exceptions and work-arounds for buggy imaging machines. With all that, we still see images that are utterly broken and only openable in the original system.
When doing the transfer over a network, it's often possible to negotiate down to the default transfer syntax (Implicit VR Little Endian, uncompressed), which every implementation is required to support, and get workable (if large, 500+ MB for CT) images. With a CD/DVD, you're at the mercy of whatever syntax the creating device felt like using.
If you include PET and similar, then you also need proprietary PET/CT or PET/MR fusion software to actually make use of the images.
I absolutely agree it's ridiculous when they don't follow the standard, but the appeal of vendor lock-in is just too much. I don't know how to change that, unfortunately.
The data flows down (disclaimer: I am not an x-ray/MRI/hospital tech in any way, and might be wrong): IME, there's not much "importing" of patient data into the medical devices, so there's no reason for vendors to play along. The data flows from the machine to the dr/patient/whoever, so the vendor has the ability to dump it out in any format they want.
Note: I'm not putting any developers working on social/enterprise apps at fault or blaming them, simply suggesting that many of those developers would be more than willing to help out on solutions; they just aren't aware of the problem.
If anyone is interested, here are a few links.
He had a bucket list, and one of the items was to write a comic book. When he passed, the comic book blogs honored him by sharing his story. http://www.unleashthefanboy.com/comics/inspirational-comic-w...
NPR made his obituary the obituary of the day. It wasn't something I knew about till weeks later, and it really made me glad to listen to it.
Firstly, most medical images are actually being saved in a standardized format (DICOM, as other people mentioned) and I suppose that was the case for your son's records, too. However, the devil lies in the details, which is reflected in your experience.
While the images are saved as DICOMs, additional information (e.g. orientation of the image, patient data, etc.) can be saved to the DICOM meta-tags. There is some standardization as to which type of information to write into which meta-tag id, some manufacturers however have varying implementations of these standards. The result may be inconsistent meta-tag structure and naming.
Example: Trying to determine the type of MRI sequence (= an imaging technique) including general physical parameters like TR and TE from the DICOM files. Every clinic may name the sequence differently. The parameters TR and TE may be in different DICOM tags, depending on the MRI manufacturer. For a correct assessment of any foreign MRI image, the physician should know the parameters (they usually have no idea).
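At least the standard tags for those parameters are fixed: Repetition Time is tag (0018,0080) and Echo Time is (0018,0081). The trouble in practice is that some vendors also (or only) populate private tags and free-text fields. A defensive-extraction sketch in Python, assuming the data set has already been parsed into a plain (group, element) → value mapping; the private tag shown is made up for illustration:

```python
TR_TAG = (0x0018, 0x0080)  # Repetition Time in ms (standard tag)
TE_TAG = (0x0018, 0x0081)  # Echo Time in ms (standard tag)

def first_float(dataset, candidate_tags):
    """Try candidate tags in order; return the first value that
    parses as a float, or None if none do."""
    for tag in candidate_tags:
        value = dataset.get(tag)
        if value is None:
            continue
        try:
            return float(value)
        except (TypeError, ValueError):
            continue
    return None

# A vendor might duplicate TR in a private tag (hypothetical here),
# so callers can list fallbacks after the standard tag:
ds = {TE_TAG: "30", (0x0019, 0x1010): "2000.0"}
tr = first_float(ds, [TR_TAG, (0x0019, 0x1010)])  # uses the fallback
te = first_float(ds, [TE_TAG])
```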
Secondly, regarding MRIs specifically, every clinic has a different so-called protocol for different diseases. A protocol consists of multiple sequences.
Example: For Brain Tumors, clinic A might want to take an MRI with the sequences X and Y. Clinic B may additionally want to use sequence Z. Furthermore, clinic B uses different variations of sequences X and Y with different parameters (think TE and TR). The surgeons in clinic B are used to planning their surgery on those specific MRI sequences. So the exact situation from your experience happens: Even though a patient provides all the images from other clinics, he/she will have to get the imaging done all over again. Additionally, this introduces totally unnecessary costs to the health system and patient.
This non-standardization in imaging sequences is hair-raising at least. And guess what, in my experience it's not even consistent within a given clinic within a specific disease.
Thirdly, regarding your experience with DVDs, they actually do include the DICOM files (in some strangely named directory, usually). However, they also include a totally crappy portable image viewing program which usually runs as autostart.exe (malware may be easily introduced here, on a sidenote). This viewing program is often a down-specced, completely outdated version of the original imaging program at the clinic. Most doctors outside big clinics do not understand that it makes far more sense to extract the DICOMs into a good DICOM viewer (e.g. OsiriX) for further assessment. Only big clinics usually have some sort of IT department which extracts the DICOMs and loads them into their PACS.
Reference: I did my doctoral thesis (Germany) in MRI research and am nearly graduated from med school. Happy to finally contribute to HN :)
Really I have nothing else of value to add :-(
In a perfect world you could just e-mail the data without any concerns about authenticity, privacy, robustness, etc. But you can't, so you have to bring in a huge firm that is going to spend a lot of money and effort getting everything right, or at least certified. Of course, with all that effort they now have a huge incentive to keep things proprietary, since they want to recoup their costs, and the large barrier to entry makes it even more advantageous because there will be little competition.
Innovation often requires COTS components to be available at a sufficient standard. No one would doubt this when it comes to something like AWS or how Tesla could make such a high level car in a relatively short time. Of course if you question the state of software you get downvoted into oblivion without reason on HN.
And if it wasn't clear, these (proprietary) systems very much exist.
My only issue is that there needs to be a requirement to interoperate. Sure, take a copyright and a patent pending, BUT make it so that other companies' systems can speak to each other. The parent shouldn't have to bear the burden of "Holy crap, I am the only person in the world who knows everything about my child." Doctors and nurses shouldn't have to have me retell everything a hundred times when 90% of that should already be available.
Innovate PLEASE but play nice with other systems.
But almost all hospitals are stuck running out-of-date software that wasn't even good when it was released.
Some such software is released open-source, e.g. NifTK, FSL, MedINRIA. Some is spun out and then sold through vendors as a service, e.g. Siemens, icoMetrix, many others.
Why aren't things used more? I'd say the main bottlenecks are monetary, political and cultural. Doctors are often set in their ways, and hospital management staff are risk-averse. Moreover, because of legal liability issues, a lot of this stuff has not been formally approved for clinical use -- so hospitals won't touch it. This is especially true of the open-source software. Getting approved is an expensive and time-consuming process. Service contracts with approved software are expensive, mostly as a result of this.
Heck, look at the struggle involved in digital patient records, which is supposedly a simpler problem than imaging. In the UK, we've spent billions on that without getting anything usable. There are uniquely bad political reasons for that, but it's illustrative nonetheless.
That's fucked up.
Any half-competent clinical DICOM viewer will have a full set of intensity windowing tools. Anything that doesn't should have its authorization to market revoked (i.e.: FDA 510(k)).
With an eye on replacing said information systems, I've had a look at the open source medical records / hospital management systems available. When I looked at the details these systems are often not great replacements. So you're replacing aging, poorly written information systems with aging / non user friendly / difficult to customise information systems.
I would like to suggest some things for you guys:
1. Instead of creating one large hospital management system from scratch, how about smaller systems that can be linked together? eg. patient records system / laboratory system / pharmacy dispensing system / billing system / etc. The systems I mention here have fairly minimal dependencies between each other. This gives you the time to create a best of breed system before moving onto other stuff. It also allows hospitals to be able to use your stuff without ripping out everything they already have!
2. Think about how a hospital would customise your system. New fields, forms, reports, workflows, logic, etc. And how these customisations would survive an upgrade of the core system.
Anyway, I hope you have success with the project and I wish you luck. I'll definitely be keeping an eye on it!
Please avoid too much diversification by reinventing wheels, instead please contribute to one of these projects.
Mobile first is a basic requirement in developing countries.
When you say "mobile first is a basic requirement" you're thinking about a specific context of software intended for the general population of people in those countries.
However, with projects like HospitalRun, the general population is not the audience; hospital staff and administrators are, and even in developing countries they still primarily use laptops and tablets at the hospital.
Smart-phone may not be the mobile target you're thinking of here.
It's about targeting contribution and letting potential contributors know what we're using so that they can know and self-select.
Goal #1 of the site currently is recruiting contributors b/c we aren't at a 1.0 release yet. We have it running in several hospitals in the charitable network that I serve, but it's not ready for wider distribution yet.
We want to get there by July 2016. Once we get there, we'll refocus the purpose of the homepage.
I've seen situations in real life where someone will say "hey this tool is cool, how did they make it?" and another person "oh they used framework X, I think." A good chunk of people will then do their due diligence and discover that it was Framework Y not Framework X... but not everyone.
There are some people still to this day, for example, who believe that AngularJS powers Youtube's desktop site.
You're likely recalling the recent article on HN that showed Ember taking longer to download the first time vs. React on a mobile phone.
In a third-world offline hospital records app, the first time difference isn't especially important, and also it's more likely the hospital record keepers will use tablets, not phones.
They're currently working on a way to get past this, by actually enabling server-rendering (much like react):
What surprised me most is that the UI does not seem to be mobile responsive, and does not work well on smartphones. I would have guessed that in developing countries mobile use would be hugely important?
The UI is still a massive WIP, but I am working on making it fully responsive down to tablet size. Our goal is to accommodate viewports down to tablet because in hospitals where the software will be run (and is already running in a few CURE hospitals) medical and administrative staff will be using it from laptops and tablets.
Viewports lower than 600px are not currently a consideration, however, as we don't currently have a legitimate use case for the software on phones.
A laptop would also give you more time, but not as much as a smartphone.
It seems you have a nice focus on usability - efficacy, efficiency and satisfaction. For me, it seems vital to make IT useful and not a burden, reducing the time clinicians waste on non-clinical duties and their general distaste for the software they have to use. I'm a UX PhD, I have experience working with very particular groups of users, and I would be very motivated to work for better healthcare.
EDIT: Here's a link to the referred to project https://github.com/OptiKey/OptiKey/wiki
That said, there are open HL7 message "interface engines" and integrating with something like Mirth shouldn't be too difficult.
The thing is, every client implements a different subset of the spec, and nonconformance is widespread.
There is no "conforming" to a spec which is a spec in name only.
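For context on why that happens: an HL7 v2 message is just delimited text (segments separated by carriage returns, fields by pipes), so the framing is trivial and everyone "supports" it, while the meaning of individual fields is where implementations diverge. A toy parse of an illustrative ADT message, not from a real system:

```python
def parse_hl7v2(message):
    """Split an HL7 v2 message into (segment_id, fields) pairs.
    Real-world parsing also needs component ('^'), repetition
    ('~') and escape ('\\') handling, plus the MSH-1/MSH-2 quirk."""
    pairs = []
    for raw in message.strip("\r").split("\r"):
        fields = raw.split("|")
        pairs.append((fields[0], fields))
    return pairs

msg = ("MSH|^~\\&|SENDING_APP|SENDING_FAC|RECV_APP|RECV_FAC|"
       "20240101083000||ADT^A01|MSG00001|P|2.3\r"
       "PID|1||123456^^^HOSPITAL||DOE^JOHN||19800101|M")
segments = parse_hl7v2(msg)
```

For non-MSH segments, field n of the standard (e.g. PID-5, patient name) is simply index n after the split, since index 0 is the segment ID.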
So long as it doesn't do over night batching, it's already decades ahead. I wish I was joking.
Because I'd believe that you can't just make something that looks better and sell it like you'd sell a typical SaaS product; there must be tons of regulations.
On the other hand, while studying in the UK, I briefly met a PhD student who was making offline-first web apps to calculate medicine dosages etc., and hospitals in the UK were actually using them (and paying him to develop them).
"At its core, the government need only do one thing to encourage innovation in the interoperability space, and it is this:
The government, by means of regulation and incentive, ensure that any vendor of data systems that create or store data make adequate interoperability features and documentation available for said system.
I call this the Core Mandate. The core mandate must be unequivocal with no loopholes. What do I mean by "interoperability features"? Simply:
- If a system creates data, the ability to read that data is fully described in documentation.
- If a system stores data, the vendor will provide an API and/or SDK, with accompanying documentation, such that authenticated requests may create, read, update or delete that data programmatically as appropriate.
A system is defined as any software application or hardware device."
So I am actually finishing the development of a similar system for a group of anesthesiologists that needed a custom app to keep track of their patients and their pain medication.
Had I known of this project before I would have actually considered contributing/forking it to handle their use cases. See this hits pretty close home since I'm Colombian and hospitals here have terribly outdated systems.
I love the idea of the app working offline and syncing when internet is available, since mobile networks here aren't very reliable. One thing, as others have mentioned: having it work on mobile is very important. I don't think that's because of a lack of PCs and desktops; it's just that doctors are always running all over the place, and it's more convenient for them to log the information on a smartphone/tablet.
Anyways, my next project is also in the medical field and will have a wider scope, so I'll keep an eye on this project for when the time comes; I'd love to contribute eventually.
- login screen takes several seconds to load
- not usable on screens/mobile devices
- seems like every click makes multiple server requests
(and loads for up to several seconds)
Thanks for asking. I'm sorry in advance that this reply is lengthy. I'll try to be succinct.
Starting HospitalRun in 2014 was a very intentional, deeply-considered decision on our part, made after reviewing a wide spectrum of the available open source and commercial solutions for our network of developing world charitable hospitals at CURE International.
We simply couldn't find a solution that we felt could both meet our requirements and be successfully deployed across our network. So without saying anything for or against other health system projects from the past or present, I'll say that the following were the driving factors for the decision to build HospitalRun.
1) Taking usability very seriously. For us, that doesn't just mean UX and clinical usability or even just administrative usability (since there's a lot more than just the doctors we're trying to serve). It also means easy to setup, easy to administer, and (and we're definitely not there yet) easy to contribute code. A developer is already doing A TON for a project like this by absorbing a totally unfamiliar set of business requirements. We want to make the experience of contributing to HospitalRun delightful. Like I said, that's aspirational - not reality yet... but that's part of what we think "1.0" looks like.
2) Architecture choices: we're committed to modern, cloud solutions while working in environments where Internet access is unreliable at best. Additionally, deployment devices need to be not just low cost but easy to setup and remotely manageable. In practice, we're deploying our early releases of HospitalRun with a small cloud instance paired with a local souped-up Mac mini with a battery backup. CouchDB gives us replication for free and some DNS magic makes it possible for a client device to never need to know if they're in the network or out on the Internet.
3) Offline access to data in a web browser: Again, we're working in areas where Internet reliability is sketchy at best, so beyond the architecture, we wanted to create a portion of the app that works EVEN when there's no connectivity at all. That's where PouchDB comes in. The offline piece still needs a lot of work, but we get syncing, etc more or less "for free" b/c of the great work in PouchDB . The objective is to create the ability for clinicians to carry records into the field, even when they can't connect. This is a real-world requirement from our hospitals that I saw first-hand back in 2011 and inspired us to create both HospitalRun and a previous research database (still in use today) throughout the CURE International network. (ref: https://blog.newrelic.com/2013/11/19/cure-uganda-improving-c...)
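To sketch the sync model in miniature: the real CouchDB/PouchDB replication protocol tracks full revision trees and a changes feed, but the core idea, comparing per-document revisions and copying whatever the other side is missing, fits in a few lines. This toy version uses plain integer revisions and last-writer-wins, which the real system does not; it is an illustration only:

```python
def replicate(source, target):
    """One-way toy replication: copy any document whose revision
    number is higher on `source` than on `target`.  Each side is a
    dict of document-id -> {'_rev': int, ...fields...}."""
    for doc_id, doc in source.items():
        existing = target.get(doc_id)
        if existing is None or doc["_rev"] > existing["_rev"]:
            target[doc_id] = dict(doc)

def sync(a, b):
    """Bidirectional sync is just replication in both directions."""
    replicate(a, b)
    replicate(b, a)

# Hypothetical data: the clinic updated p1 while a field tablet
# was offline creating p2; after sync both sides have everything.
clinic = {"p1": {"_rev": 2, "name": "Doe, John", "hb": 13.1}}
field_tablet = {"p1": {"_rev": 1, "name": "Doe, John"},
                "p2": {"_rev": 1, "name": "Roe, Jane"}}
sync(clinic, field_tablet)
```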
4) It's more than just software. We have a goal of codifying the implementation process, and equipping people who want to deploy the solution for a given hospital with the tools to make their volunteerism effective and meaningful as well as the facility successful. Again, that's an aspirational goal.
5) Real-life customers. HospitalRun has an advantage of being an open source project that is being sponsored by charitable network of hospitals in the developing world, seeking to provide the highest quality surgical care to some of the least served populations. Many of the core team members are interacting with real people who are colleagues, which we generally all agree is good for the requirements process. Together, we're trying to build a solution that not only meets the needs of those hospitals but (hopefully) hundreds or thousands more across the majority world.
It's important to underscore that HospitalRun is not at a 1.0 release yet. We have modules of the system deployed in several CURE hospitals, but there's a lot to do before we'd encourage people to deploy it.
Hopefully, exposure like this will help us meet the people who can get the project where we think it can and needs to go for the children and families we serve at CURE, as well as the thousands of other facilities across the majority/developing world who can be positively affected by it.
Btw, many of these points are found in a paper we wrote to respond to this line of questioning (b/c we asked ourselves the same question before pressing "start") http://goo.gl/NCJDnJ
The reality is that some of the biggest vendors (US and UK based) actually ship some of the worst software and charge hundreds of millions of pounds for it. Ultimately, it's the patients that pay the price. It's a complete myth that bigger vendors build safer systems.
I agree with a previous poster in that given more visibility, I'm sure there are thousands of developers that would love to contribute to this type of project (myself included).
It's also pretty clear that mobile will be a key requirement for this type of solution. Not just a responsive website but full native implementations. I realise that this can be expensive, but if it's open source, I'm sure there are developers that would love to get involved. Maybe you would consider an API? This might encourage an ecosystem of client solutions to flourish.
Finally, do you think there is a role for a patient login here? There is a world-wide movement to encourage patients to play a more active role in managing their healthcare and giving patients mobile access to their health records is a great first step towards this.
I had a quick look at the demo and it looks like the development is in the early stages- a bit of (hopefully constructive) feedback: I think they (you?) may be trying to attempt to do (and cover) too many clinical disciplines at the same time- maybe implement individual modules (like patient registration) and test them in all (old!) browsers in more detail before moving on to the next. Also think long and hard about how you implement your data model (clinical indicators e.g. blood pressure often have a context and are temporal values, how do you model these?). This is a great effort and has lots of potential.
EDIT: also, the name seems to suggest to me like there is a run on hospitals- but that may be a personal thing.
They are using PouchDB + CouchDB for offline-first / syncing, which is how I came to notice it (I am a maintainer for PouchDB). So their ability to sync should be reliable and well tested, and if not then it's bugs for us to fix :)
The epitome of "User Needs"
In the next couple of weeks, when it opens, I aim to join the UK government's "Digital Services Framework", where UK government software is expected to be written as open source.
The hope and intention is that development of a UK government solution (one that, say, relies upon CouchDB) will thus have taxpayer money delivering bug fixes to users in the less well-off parts of the world.
It's the right approach - and one that will have long term benefits not just for end users of software, but subtle benefits for UK / western countries.
We are getting better at this as a species - slowly. And it's attitudes displayed here that will deliver the most value in the world - even if you might not be the one to capture the cash...
As for limits, desktop browsers are generally 'unlimited'; mobile browsers can be stricter, and people often use wrapper projects (like Cordova etc.) to get around that and have unlimited storage.
I'm assuming there'd need to be a services business installing and maintaining the software, and adding improvements particular customers need.
Medical software is a different beast compared to, say, your next online shop or blogging platform.
Bugs in medical software can (and will) kill people. My work takes me to medical software development courses on a regular basis, and they usually consist of looking at the ways medical software can kill people; sadly, often enough by example.
One case, for example, was a PACS system where, due to a bug in the way the database managed timestamps, only the very first of a series of pictures was shown to the user. And since one could not navigate the pictures via "previous" and "next", but always had to go through a purely text-based menu (this was in the mid-1990s, when memory was scarce), it was not immediately apparent that only the first image in a time series was shown. Enter the patient with a tumor; when it was finally realized that the images the medtech took were not the same ones the specialist saw, it was already too late for the patient and tumor growth had progressed too far.
So your medical database software kills someone (prescription error due to wrong dataset shown or such), how do you determine whose liability it is/was? Medical software certification is in large part about identifying what harms to a patient could be done and which parts of the software may cause it. You don't even rule out in a "this can't happen" way, but it goes like:
- patient dies: no matter how well it was tested, these are the possible offenders in the program
- patient gets seriously harmed: no matter how well it was tested, these are the possible offenders in the program
- patient gets injured: no matter how well it was tested, these are the possible offenders in the program
The bottom line is you're ending up with something that is either close to or outright is waterfall.
And even more important: These are the components a program uses, what possible failure modes are there that could harm patients. So they're using CouchDB? Well, is CouchDB medically certified (AFAIK not), so this is considered SOUP (Software Of Unknown Pedigree) which means that to legally use this in medical applications you have to certify the SOUP yourself.
Oh and putting medical records into the Cloud? What could possibly go wrong…
Unless we are talking foolish regulation, FOSS backed by taxpayer development money is likely to produce better and more open, transparent and testable code than proprietary companies competing against each other.
Government regulation perhaps should focus on building automated test suites as a first high level bar.
You cannot just install some software and start putting patient data into it.
Each country has a regulation process how the said software can be used and there are international standards as well.
This area is known as "Electronic Health Record" and "Health Information Technology" systems.
For example, for devices : ISO IEC 62304 Medical device software – Software life cycle processes
I guess this is the kind of thing that's hard to enforce on an open source project with "open contributions".
It's important to remember that EMRs are there to support healthcare professionals who are used to working with incomplete / incorrect information all the time. For example, a patient isn't going to have a drug dispensed to them without a pharmacist checking it (and, frequently, asking the prescribing doctor to clarify/amend questionable doses or instructions).
That is indeed the case (medical record software not being classified as a medical device software product in Europe). However, medical record software has since been added to the scope of some ISO 80xxx standard. In the USA, though, medical record software is considered a class B or even class C medical product by the FDA, depending on whether the software is used just for record keeping or actively used for therapy planning.
Whether they are followed is another matter.
This project looks good, but it does remind me of the issues with management systems for public services. With the level of requests we deal with from school users that "need" features to satisfy management/governors/government it makes it very hard to do open source and be suitable.
Things have long since moved on from that period, but a few of us are just curious to see if the code from that time is still around, or if our product is one mavhc was talking about (unlikely).