A company I worked for had a new website built (their big customer facing domain). It was just a Drupal theme, responsive, absolutely nothing fancy.
An old friend of mine guessed $20k, maybe $30k tops if you were being crazy, and I thought even that was high. Back in our consulting days we probably would have quoted lower.
They brought on a company for $1 mil, ended up 6 months late and $1.6 mil.
Gotta love those project managers and "status update meetings" to burn the cash.
Most companies won't buy unless you can guarantee a support contract for at least two years, with new features included. This is part of why software prices skyrocket so easily.
You have to have at least one developer who makes sure authentication uses CAC, X.509, or at least Active Directory as its source. You have to have at least one specialist developer who ensures that logs and audit trails are DCAA compliant.
You have to have at least one developer to ensure that the website is Section 508 compliant.
So on and so forth, and that's not even counting however many developers you need to actually build the basic functionality they've hired you to build.
On top of that, you need: a proposals team capable of responding to an FBO request for proposals; a contracts lawyer to ensure that the technical solution you're providing meets the contract terms; an accounts payable and accounts receivable team to team up with that lawyer and figure out how to prove to the government that you actually delivered the module or widget they insist is deficient, so they can slough off paying you for as long as possible because they're in continuing resolution; a veteran / minority / woman / disadvantaged person with an ownership stake to be eligible for 8(a) program opportunities; a project manager to actually manage the work; a program manager to try to upsell the opportunity and keep an ear to the ground for insider details on up-and-coming work to pass to the proposals team; and a subcontractor with some kind of past performance in government work that you can leverage to assure the government that you are, by proxy, also qualified to do that kind of work for that kind of agency.
And of course, there are the myriad competing interests of the various stakeholders, a Herculean degree of risk aversion, and the fact that everybody wants you to fail. That includes the customer, who didn't sign on to a risky project that will likely fail and so is duty bound to ensure that it does fail. And it especially includes your "support team" of people you have to ask for DNS entries, database tables, etc., who are actually competitors of yours that also bid on the work but failed to win; they just want you to fail so the contract can get re-bid and they have another shot at winning it.
Of course, that's not the worst part. The worst part is if and when a project appears likely to succeed despite all the attempts at ensuring it doesn't. Then everybody and their mom becomes an "active" stakeholder in your project, because it's a chance to hitch their wagon to a success they can put on their resumes to get promoted... They want you to customize your project to suit their needs so they can put their stamp on it and claim a clear, measurable benefit to their division / department, even though this work is clearly ad hoc, not in the contract, and they're extremely unlikely to want to pay you for it because of that.
Completely boring nothing-special website.
This realization you had is precisely one of the reasons I was very vocal in a recent thread about a startup that wants to (watch out, projectile vomiting coming) make manufacturing "orders of magnitude" better.
The problem with a lot of these ideas is ignorance of what real product development and manufacturing entail in the context of real-life applications, regulatory constraints, liability and more. They think that because you can iterate code fast while sipping a latte at Starbucks, the same "logic" can be applied to manufacturing and, voilà, "orders of magnitude" better manufacturing.
Reality, as you have come to realize, is often far more complex than machining a few pieces of metal, throwing together a microprocessor board and writing code over a weekend. And every industry is different.
This is often why startups and SMEs can sell at lower cost too. The overheads are lower. I work for a medical devices company and our main competitor is more expensive than us and keeps having layoffs (or so it is rumoured). It's hard for them to cut prices for similar products when they have 10x the staff.
As an example, consider restaurant sanitation regulations. If restaurants are frequently unsafe, people will learn pretty quickly that it's better to eat at home. But if restaurants are generally seen as safe, people will eat out a lot more. That benefits all restaurant owners.
Regulation can harm at least as much as it can help. Consider that regulation is based on ideas that people think might help a situation, then add in some partisanship. Of all the ideas people come up with, how likely is it that any given one will do good rather than harm?
Perhaps it would be better if there were some way to incrementally evaluate regulations. Perhaps if they had a defined lifetime to force re-evaluation in light of actual experience. That would prevent at least one negative aspect of regulation: locking into practices that were thought to be safe, and maybe were, but are no longer the best/safest way to do things.
My guess is about 99.9% of ideas are bad.
More to the point, many regulations expire or must be periodically re-evaluated by law.
This is ridiculous. Transmitting needs a good amount of energy for a high number of channels (several dozen, quite often > 100), and at a high frequency. If you are driving a 10MHz transducer, you will, for example, drive it with an 80MHz digital signal (at least when using a low number of levels, which you often want in order to keep a high efficiency for the relatively high-power TX)... Citing a Raspberry Pi GPIO pin for that shows the guy does not know what he is talking about.
Reception is not trivial either, if you want decent quality. You also sample at a rate greater than the centre frequency.
Of course you might be able to construct an amateur toy low-end ultrasound machine, but it would be of no clinical value (and of limited value for a lot of other purposes too). Also without extensive measurements, you should not use your resulting machine on any living thing...
In the end I would say the writer is right and you are wrong. He is painting with an extremely broad brush but his guesses are dead on the money. You should look at the links he provides at the bottom, there are exceptionally good DIY and open source ultrasonic machines.
Making the transducer is very, very difficult but he doesn't pretend it isn't.
The Sonix linked below has 32 40MHz 10-bit ADC channels, which is a factor of 64 off from the 80 MHz figure if one were to assume 8 bits, and places us firmly into the realm of several-hundred-dollar ADCs.
Not impossible, but I'm pretty sure that's where much of the cost is hidden.
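For a sense of scale, the raw front-end data rate implied by those numbers (32 channels, 40 MHz, 10 bits) is easy to tally:

```python
# Aggregate ADC data rate for the Sonix-style front end cited above
channels = 32
sample_rate_hz = 40e6    # samples/s per channel
bits_per_sample = 10

total_bits_per_s = channels * sample_rate_hz * bits_per_sample
print(total_bits_per_s / 1e9)  # → 12.8 (Gbit/s)
```

That aggregate stream has to be captured, buffered, and beamformed somewhere, which is exactly where the cost tends to hide.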
If you start to drive your TX path with a "small" (depending on the number of channels) FPGA or CPLD, yeah, this is more feasible than with Raspberry Pi GPIOs; this is even how some good ultrasound machines are built (depending on the volume).
In the end the cost of the hardware won't be far from the production cost of a "conventional" ultrasound machine. Because that industry has "low" volumes and very high R&D costs (especially for machines intended for diagnostic use), the difference between production cost and market price is very high. If you can eliminate part of that difference because you operate in another context, that's one way to drive the price down.
So yes, it should be possible to build one with limited capabilities for a few k$ per unit. (Spending various amounts of money on R&D, depending on what you're doing precisely.)
This is true, but it's only half of the story: the market price is also high because the elasticity of demand for medical devices is really low. Hospitals charge their customers a lot of money, and their willingness to pay is also high.
I've developed software for customers working in the legal industry and it's the same kind of market: these people pay a lot for really simple stuff because they have money and want to get things done.
An AM5716 or 5726 would be $30-$40 (although buying one is a little trickier). That's 7-8 processors in one, plus 1-2 DSPs: plenty of power to handle all the processing required. Even a 128-channel device should only be ~$2000 in parts (minus the transducer). One with limited capabilities should be <$500 (again, minus transducer). Much less if you make a few more compromises in the ADCs.
We can make SDRs for incredibly low prices nowadays, the only real difference between that and ultrasound is the number of channels. The only reasons ultrasound machines cost more than 5 grand are economic inefficiencies. The engineering has been solid for a long while now.
I wonder if there's an equivalent to 'synthetic aperture radar' in ultrasound. Then maybe you don't need 32 channels.
There is all kinds of fun stuff you can do with ultrasound, and some of it is similar to 'synthetic aperture radar'. However, 32 channels is still quite low; probes for humans commonly use ~100 channels.
I would expect that they are looking at the return as an amplitude modulated and phase-shifted signal at the carrier frequency. They remove the carrier frequency and then look at the (much slower) modulation frequency.
I'm not sure what you gain from looking directly at the carrier frequency...
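That remove-the-carrier step is standard quadrature (I/Q) demodulation. A minimal sketch, where the 10 MHz carrier, Gaussian echo envelope, and 2 MHz filter cutoff are all illustrative values, not anything from a real machine:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 80e6   # sample rate
fc = 10e6   # carrier (transducer centre frequency)
t = np.arange(0, 50e-6, 1/fs)

# Simulated echo: a slow Gaussian envelope on a phase-shifted carrier
env = np.exp(-((t - 25e-6)**2) / (2 * (4e-6)**2))
echo = env * np.cos(2*np.pi*fc*t + 0.7)

# Mix with cos/sin at the carrier, then low-pass away the 2*fc term;
# what's left is the (much slower) modulation
i = echo * np.cos(2*np.pi*fc*t)
q = -echo * np.sin(2*np.pi*fc*t)
b, a = butter(4, 2e6, fs=fs)
env_rec = 2 * np.sqrt(filtfilt(b, a, i)**2 + filtfilt(b, a, q)**2)

# The recovered envelope peaks where the true envelope peaks (25 us)
print(abs(t[np.argmax(env_rec)] - 25e-6) < 1e-6)
```

The phase shift survives too (as the angle of I + jQ), which is what Doppler modes rely on.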
Agreed, but I wouldn't rule out the possibility that a crazy individual with sufficient hardware and software signal processing experience could make it work, where "work" allows for a significant tradeoff or two compared to a modern commercial machine.
DAC is probably the easy part -- use an LFSR and maybe some jellybean filters to get spectral content where you need it. 10MHz is slow enough that you could probably phase it in software if you handwrote everything in bare-metal assembly, paid attention to interrupts and DRAM, etc, etc. At first I thought you'd need a coprocessor (and in that case it might as well be a proper Spartan-6), but perhaps it can be done with a dedicated core. ADC is probably the harder part. You can't kick the can down the road anymore: both cost and value are tightly coupled to 2^(bit depth)*(sample rate). If you try to make a shitty ADC out of GPIO pins, you just wind up with a shitty ADC; you can't really hide the flaws and play up the strengths like you could with the DAC.
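The LFSR idea can be sketched quickly. The taps below are the standard maximal-length taps for a 16-bit Fibonacci LFSR (x^16 + x^14 + x^13 + x^11 + 1); the seed is arbitrary:

```python
def lfsr16_bits(seed=0xACE1):
    """Pseudo-random bit stream from a 16-bit Fibonacci LFSR."""
    state = seed
    while True:
        # XOR of taps 16, 14, 13, 11 (bits 0, 2, 3, 5 of the register)
        bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (bit << 15)
        yield bit

# A maximal-length 16-bit LFSR only repeats after 2^16 - 1 steps, so the
# output spectrum is noise-like and flat; analog filters then carve out
# the band you want
state, steps = 0xACE1, 0
while True:
    bit = (state ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
    state = (state >> 1) | (bit << 15)
    steps += 1
    if state == 0xACE1:
        break
print(steps)  # → 65535
```

In hardware this is a shift register and a couple of XOR gates, which is why it's such a cheap way to generate drive signals.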
Or maybe I had it backwards and you could use a good DAC and some sort of sigma-delta like trick to make the shitty GPIO - ADC work.
Fun stuff to think about, but I suspect the real innovation will happen when someone in hardware realizes that DAC/ADC/FPGA technology has slowly and surely advanced far enough that they can do to the ultrasound market what the DS1054Z did to the oscilloscope market.
OTOH, the single-transducer Doppler ultrasound devices used in early pregnancy to hear the baby's heartbeat can be had for $30 on eBay.
For-parts ultrasound arrays (which are usually broken because of obsolescence, frayed wires, cracked cases, bent connectors, etc., rather than destroyed ceramics) cost tens of dollars on eBay. But I agree with your overall point: hardware hacking ain't cheap. A $3k bench is north of desperation territory but still a fraction of what entry-level EEs get at the company I work for, and we're not in any line of business that would be considered "performance" by test-equipment standards.
Nope, they basically use a series of pulses at 10 MHz and rely on the frequency response of the transducer to turn it into a nice wave packet.
Source (look at pulseShape):
A 10MHz transducer is resonant at 10MHz. If you drive it with a square wave at 10MHz, it will respond strongly at 10MHz and very poorly at frequencies that are not 10MHz. Thus, a 10MHz square wave driving a 10MHz transducer will produce a pretty good 10MHz sine-wave sound signal. This does depend on how sharp the transducer's resonance is, but it shouldn't be that hard.
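That filtering effect is easy to simulate. Below, the transducer is modeled as a simple second-order resonator at 10 MHz (the Q of 10 is an assumed, illustrative value); the square wave's third harmonic comes out strongly suppressed relative to the fundamental:

```python
import numpy as np
from scipy import signal

fs = 200e6          # simulation sample rate
f0 = 10e6           # transducer centre frequency
t = np.arange(0, 20e-6, 1/fs)

# 10 MHz square-wave drive
drive = signal.square(2*np.pi*f0*t)

# Model the transducer as a 2nd-order resonant band-pass at 10 MHz
Q = 10  # assumed, purely for illustration
b, a = signal.iirpeak(f0, Q, fs=fs)
out = signal.lfilter(b, a, drive)

# Compare the fundamental against the square wave's 3rd harmonic (30 MHz):
# the resonator knocks the harmonic way down, leaving a near-sine output
spec = np.abs(np.fft.rfft(out))
freqs = np.fft.rfftfreq(len(out), 1/fs)
fund = spec[np.argmin(abs(freqs - 10e6))]
third = spec[np.argmin(abs(freqs - 30e6))]
print(round(20*np.log10(fund/third), 1), "dB fundamental-to-3rd-harmonic")
```

A square wave's third harmonic starts at only a third of the fundamental, and the resonance attenuates it further, so the acoustic output is already quite clean before any extra filtering.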
Ignoring that, creating a 10MHz RF sine wave isn't that hard using classical analog techniques: wifi chips in modern computers create a 5MHz carrier wave that can be amplified and doubled with some cheap off the shelf parts.
To be clear, a sampling rate that is only greater than the center frequency isn't very useful. You need the well-known Nyquist rate, twice the maximum frequency, to avoid aliasing.
For example, a signal that only has energy in the range of 9MHz to 11MHz has a center frequency of 10MHz but a bandwidth of 2MHz. You could sample, and reconstruct, the signal perfectly with a sample rate higher than 4MHz. Any content between DC and 9MHz that was present would distort the sampled waveform, though, and create artifacts.
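A quick numerical check of that claim. The tone placement is illustrative, and 4.4 MHz is one sample rate that keeps the whole 9-11 MHz band inside a single Nyquist zone (not every rate above twice the bandwidth works):

```python
import numpy as np

f1, f2 = 9.5e6, 10.5e6   # two tones inside the 9-11 MHz band
fs = 4.4e6               # far below 2 * 10.5 MHz, yet a valid band-pass rate
n = 4096
t = np.arange(n) / fs
x = np.cos(2*np.pi*f1*t) + np.cos(2*np.pi*f2*t)

spec = np.abs(np.fft.rfft(x * np.hanning(n)))
freqs = np.fft.rfftfreq(n, 1/fs)

# Each tone aliases down but the band's structure is preserved:
# 9.5 MHz lands at 0.7 MHz, 10.5 MHz at 1.7 MHz -- still 1 MHz apart
lo = freqs[np.argmax(spec * (freqs < 1.2e6))]
hi = freqs[np.argmax(spec * (freqs >= 1.2e6))]
print(round(lo/1e6, 1), round(hi/1e6, 1))  # → 0.7 1.7
```

This is exactly the undersampling/mixing idea discussed below: the sampler itself does the frequency shift, provided an analog filter has already removed everything outside the band.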
Of course, nowadays most often one just uses a very fast ADC, and one can then choose a suitable sub-band by decimation (in a FPGA or ASIC), a kind of digital frequency filtering.
Undersampling was often used in older RF gear (and maybe it still is for reaching the highest frequencies? I don't know the current state of the art).
This is essentially like using a mixer in RF circuits. I'm pretty sure it's still used in many variations. Generally speaking you can't go high enough in sample rate to allow every RF signal (in the GHz possibly) to be sampled directly. It's always some sort of frequency shifting + filter. The key is to filter out frequencies out of the band before sampling and the band does not need to start from DC (0Hz).
I think it's pretty clear that he doesn't, but it's not like he's claiming to be an expert.
I'm sure such a device can be made much more cheaply than existing solutions, but the author doesn't seem to have a realistic idea of the difficulty.
This is a little more complex, but the cost is going to be of the same order of magnitude as a Pi.
Power on the output side is the big one for sure, however. Also I'm not sure the transducers cited are anywhere near big enough.
From an economic perspective (and this article is about economics), companies release software when they hope that the cost of the freeloaders is outweighed by the benefit additional contributors will provide. (Or that broad adoption of the software has some other benefit to the company opening it.)
Generally, the more broadly useful the software is, the more economic sense it makes to open the source.
I see no evidence that there would be any economic benefit for any company opening the source of their very specialized, very expensive software here.
So yeah, if it were free, it would cost less. OK then.
As opposed to paying a little more up front to a third party insurance agency or offering to pay more to the OEM for an optional insurance plan? I don't see your point.
A medical institution will be paying for the insurance one way or another no matter what. It's either fronted by the institution itself, or in the cost of the equipment that they purchase.
Company X is considering using Bob's software to release medical device Y. If the cost of developing Bob's software in house is greater than the risk of using Bob's software, they can choose to use Bob's software. They can also audit Bob's software, release a patch, and then use that version, saving money and reducing risk.
Note, it's Company X, not Bob, that's taking on liability, but Company X also gets the profit from selling device Y.
A common restriction is to release the source code when distributing the software. But that's not really an issue with medical devices, as there is a physical device involved, not just software.
The commercial breakdown occurs because each recipient also has the right to redistribute the source code and binaries, so Original Author Alice sells to Bob for $100, Bob sells to Charlie for $50, Charlie sells to Dan for $10, and Dan posts it online for everyone to get for free. Even if Dan's site goes offline, someone will create a mirror. Open-source licenses make this perfectly legal. Thus, open-source authors do not have an enduring market for selling their software.
The loophole for commercial OSS companies is usually something called dual-licensing, enabled by the unique "copyleft" provisions first introduced in the GPL.
Copyleft means that the license is infectious. Any code linked against GPL code also becomes GPL. If someone links against code that they cannot legally make GPL or refuses to distribute the linked code under the GPL's terms, they have violated the GPL and could be sued by copyright holders to enforce compliance, stop distribution, and/or seek damages.
This infectious element is why some people and companies are very cautious about the licenses on the open-source projects they use. Some household names have had some close calls by incorporating GPL code without fully understanding the ramifications (and some household names may be in hot water over this soon, as GPL violations are not entirely uncommon).
Copyleft is great for most pure open-source projects since it means that everyone has to share back not just their changes to the software, but also the stuff they build on top of your project. However, because open-source software usually doesn't sell well (as discussed above), it means that people who want to sell their software commercially cannot use any GPL code anywhere in their software -- unless the copyright holder also makes that code available under a non-GPL license that won't infect the linked software.
This allows people who want to use your code as a foundation or library in a commercial package to pay for a commercial, non-infectious license, and it allows people who don't need that to use the GPL version, which requires that their code becomes free too.
Dual-licensing is the way that many open-source software companies have survived and tried to harness the best of both worlds. TrollTech, who made Qt until they were acquired by Nokia (and then spun off after the Microsoft liquidation), is one such company that lived many years off the dual-licensing model.
The cost often really isn't from writing the software, but typically from the regulatory, QA and hardware development. All of which is dwarfed by validation if you need to do a PMA, but devices like this will typically be a 510(k). Software development itself is a bit more expensive than in some areas, mostly because you really need to follow engineering practices (e.g. traceable V&V, full documentation, test & review practices, etc.). Usually that isn't the dominant cost of bringing a device to market.
Or, you know, someone donates some of their time to write it. That's usually how OSS works.
Think about how much research the person writing the article has done. If you could buy a $25 transducer off eBay, they could have put that time into writing the software.
Sure, those high school and university students building up their github portfolio for job interviews.
The majority of serious open source projects are backed by hardware or consulting companies, and usually fade away when that support is no longer there.
And who knows, the hobby project might transform into a company. Does that make it an instant "serious open source project"? Got to start somewhere.
All the popular open source projects succeeded because developers got paid. This fantasy of large scale complex projects with unpaid experts writing code, developing tests, and creating documentation, etc is just that, a fantasy.
Oh really? Let's see now.
> We're not talking about a general open source project with a potentially large userbase. So what you're essentially advocating is that we have to wait till a few hundred people serendipitously have enough time to dedicate to their hobby of writing control software for medical devices. And after that they have to validate the code, test it and ensure its correctness.
Not at all. A start would be a proof of concept device with barely working software. Maybe even not that, maybe the first PoC uses an oscilloscope for visualising the data. Then someone takes e.g. a Beagleboard and dumps the data via USB into a small Python script. V2 might add a bit of a colour map with matplotlib, maybe a GUI or just live updating Jupyter notebook. That's a start.
Why does everybody assume you'd want to replace the medical devices? Did we even read the same article? The authors even say that
> "[c]reating any device for medical purposes can be incredibly expensive, but this ignores all the other uses that ultrasounds can have in education, imaging, sports training and just for fun".
Except for sports training, do we need medical certification or perfect accuracy? No. So why is it so hard to believe that one person could knock something together in a weekend if the transducers were available? You know, for fun, out of curiosity?
And anyone can already do that. Creating a prototype is like 5% of the work. Nobody really cares about it unless you can ACTUALLY use it for ACTUAL stuff. I work in industrial automation and I have several hacked-together prototypes where I'm running some scripts or software on a micro-controller. And those prototypes do "cool" stuff for a fraction of the cost. But there is absolutely no way my customers would ever consider using a prototype PLC for anything, not even testing. So I don't think you're getting it. This isn't Linux, where your end users are mostly software/technical people. Normal laypeople are not interested in testing experimental stuff like this.
>Why does everybody assume you'd want to replace the medical devices? Did we even read the same article? The authors even say that
Because it would be unethical to produce medical diagnostic devices (or even call them that) without proper code review, validation, and so on.
>Except for sports training, do we need medical certification or perfect accuracy? No. So why is it so hard to believe that one person couldn't knock something together in a weekend if the transducers were available?
Okay so yeah I totally believe you can put together a janky POS that barely works and is unreliable. But without proper validation the results such systems produce are entirely useless.
> You know, for fun, out of curiosity?
Yes, I agree. You can do _ANYTHING_ for your own amusement, fun or curiosity. BUT... if you want to make something that is useful to other people you have to do a bit more work. And that work doesn't come cheap.
Hmm, maybe if you can find a way to lock in users so they always have to come to you for the hardware. One way to do that, on a project that sees a lot of source-code churn, is to tightly couple it with your services/hardware. Any competition downstream will find it hard to keep up with your commits, in addition to having to patch in support for different hardware/services.
Lols, is this how you view open source?
That sentence has nothing to do with my own opinion of free software.
But I think the gist of the post is that the hardware cost of a high end ultrasound unit is rapidly decreasing due to advances in ADC technology and the general trend of moving more and more functionality into DSP that used to require specialized analog circuitry.
For example, I recently bought a 20MHz spectrum analyzer and oscilloscope with built-in tracking generator for $145. Gear of similar quality would have cost tens of thousands of dollars just a decade ago.
For filter work that'd be near perfect.
Maybe I should get one of those and a USB DJ controller for input.
Good idea about mapping some knobs to the UI.
I've used it to measure harmonics when testing some low pass filters, and the readings match some other gear I own.
Nice. Does it work under Linux too?
The windows software seems to be under active development and I was able to skype chat with one of the engineers before I bought it to ask a few questions.
I actually chose biology over computer science because of problems like this. Now, I don't think I have all the knowledge necessary to build an ultrasound myself, but at least I have the ability to read the literature and make sense of it, and I can understand the language radiologists and doctors use to describe an ultrasound.
I don't think the programming would actually be that hard. It's basically sonar for people. There are tons of builds of devices that use time-of-flight to produce images. I think you could actually get something reasonable working pretty quickly if you had access to testing apparatus and a radiologist.
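The time-of-flight arithmetic at the heart of it really is simple; the only domain-specific constant is the assumed speed of sound in soft tissue (~1540 m/s, the conventional calibration value):

```python
# Echo depth from round-trip time: the pulse travels down and back,
# so depth = c * t / 2
C_TISSUE = 1540.0  # m/s, conventional soft-tissue speed of sound

def echo_depth_mm(round_trip_s):
    return C_TISSUE * round_trip_s / 2.0 * 1000.0

# A 100 microsecond round trip corresponds to a reflector ~77 mm deep,
# which is why a full scan line only takes on the order of 100 us
print(round(echo_depth_mm(100e-6), 1))  # → 77.0
```

Everything hard, as noted elsewhere in the thread, is in acquiring clean echoes in the first place, not in this conversion.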
I don't have the skills to build the ultrasound machine myself, but I'm not an EE. I don't think the programming is a huge barrier though.
Quick edit: I would probably try emulating a system with physical lenses first. It seems like an easier problem. There was an article on Hackaday a while back about a guy who built a phased array radar in his garage, but it seems harder than the physical lens version:
One quote from the article which I think is pretty relevant:
"If you are willing to trade acquisition time for cost you could implement a much less expensive near-field array using switching techniques"
And here's an article on a DIY Ultrasound development kit:
I specifically wanted to do bioinformatics, but the field pays poorly and also requires an advanced education.
I'm sure you know more about signal processing than I do, but trust me when I say a simple diagnostic ultrasound is a pretty rudimentary bit of kit. Most medical imaging is pretty simple actually (not accounting for signal acquisition). Radiologists are trained to read fairly abstract charts, and they want as little processing as possible. Imagine if a CT machine tried lining up images, rather than presenting the raw slices. That might make sense for mapping data, but if you were trying to diagnose a displacement of something, like a broken bone, having the image "fixed" wouldn't do you much good.
That's part of the reason why older ultrasound images of babies are so inscrutable to the casual observer. Since the technician is slowly sweeping a 1D or 2D array by hand, the printed image ends up looking pretty weird if the baby moves. An ultrasound can be 100 dB inside the womb, so the baby tends to start moving when the ultrasound is performed. The horrible images aren't much of a problem, because the images they give to the parents aren't really used for diagnosis. They use the monitor for that purpose. If there is something the tech wants to explore further, they just look at that area some more.
Based on my limited knowledge of SAR, it seems like the processing is way more important because you are working with data that has been captured in the past.
Edit: Edited for clarity, and added source
This patent, from 1989, indicates that most ultrasound transducers are either single-element or linear arrays. It was the earliest patent I could find with a cursory look that had a three-dimensional array.
Regardless, even Wikipedia suggests that most of the arrays used for medical imaging use either a single element or a phased array:
To generate a 2D-image, the ultrasonic beam is swept. A transducer may be swept mechanically by rotating or swinging. Or a 1D phased array transducer may be used to sweep the beam electronically. The received data is processed and used to construct the image. The image is then a 2D representation of the slice into the body.
3D images can be generated by acquiring a series of adjacent 2D images. Commonly a specialised probe that mechanically scans a conventional 2D-image transducer is used. However, since the mechanical scanning is slow, it is difficult to make 3D images of moving tissues. Recently, 2D phased array transducers that can sweep the beam in 3D have been developed. These can image faster and can even be used to make live 3D images of a beating heart.
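The electronic sweep in the quoted passage boils down to per-element firing delays. A sketch with assumed geometry (32 elements at 0.3 mm pitch and a 20 degree steering angle are illustrative values):

```python
import numpy as np

c = 1540.0              # m/s, speed of sound in tissue
pitch = 0.3e-3          # element spacing, m (assumed)
n_elem = 32
steer = np.deg2rad(20)  # steer the beam 20 degrees off axis

# Element positions, centred on the array
x = (np.arange(n_elem) - (n_elem - 1) / 2) * pitch

# Fire each element so its wavefront lines up in the steering direction
delays = x * np.sin(steer) / c
delays -= delays.min()  # shift so the earliest element fires at t = 0

# The total delay sweep across the aperture is on the order of 2 us,
# well within reach of cheap digital timing
print(delays.max() * 1e6, "us")
```

Repeating this for a fan of angles gives the 2D slice; the received echoes get the mirror-image delays before summation.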
Point being, I think you are wrong. I'm not an EE, so I can't speak towards signal processing, but I am a biologist by training, and I don't see a clear reason why sonar principles wouldn't work. We are basically a bag of salt water.
Also, I am familiar enough with ultrasound to be sure that models with only a single transducer are very common. Hospitals and the like might be using the fancy-pants multi-dimensional arrays now, but the units we used to image things in college were definitely not multi-dimension. For one thing, they were older than the patent that demonstrated multi-dimensional arrays.
In SAR, the speed of the medium doesn't change; in ultrasound it changes every few mm.
Other stuff (software, controllers) is also expensive, but it's probably a fraction of the transducer cost so buyers tolerate it in order to get manufacturer support/service contracts
Oversight, regulation, and certification is what prevents some applecart-upsetter from barging in to the market, charging $12/pill without any haggling. Instead, the incumbents can be tipped off by captured regulators that someone who won't play ball is on his way, so they can temporarily lower their price to $5/pill until the new jerk runs out of funding and dies. Then it's back to $100/pill. Heck, make it $1500/pill, pour encourager les autres.
However, companies aren't selling these pills directly to consumers, they go through pharmacies and doctors. If you don't have insurance or your insurance doesn't cover the medication, these pharmacies and doctors usually have multiple options at their disposal to increase the affordability of the medication, whether it's a manufacturer-provided financial hardship program, substituting the same medication from a different manufacturer (i.e., "generic version please"), assistance signing up for government-provided medical benefits like Medicaid, or something else.
Doctors do the same thing. Their "cash price" is a hyperinflated joke that exists only to anchor their negotiations with the insurance company. If you don't have insurance, you can and should ask about options to slash the sticker price. You can often get an instant 50% reduction just by asking.
The people who really get screwed are the people who have some type of billing snafu and end up with a cash price account in collections. This will haunt your credit report for at least 7 years and will make your life unpleasant in other ways, but even at this point you can usually negotiate a large discount. Such snafus can happen for various reasons and are unfortunately not rare, as you might guess from the ridiculous complexity of the system already described.
If the medical provider sues, it's possible a judgment could be entered against the individual for the cash amount that no one is ever expected to pay anyway. That is the worst case scenario, and it's bad, but even then most Americans are not stuck. They can file for bankruptcy protection and have the matter settled. In most states, bankruptcy will not require a person to surrender property that is needed for their daily maintenance, and some states have very strong homestead exemptions that ensure a homeowner will not be forced to sell his/her primary residence.
Of course, all of this is a massive disaster, but it should be known that in real life, virtually no one pays $100/pill. :)
So the trick is to get Americans to pay $10/pill at the point of purchase, and an additional $3 taken out of every paycheck, before taxes, whether they get the pill or not.
The US healthcare system is a cesspool of interlocking scams and cons. Those who genuinely want people to be healthy, and for sick people to get well, are constantly at war with those who operate under the assumption that a person will hand over everything they own (and maybe even some stuff that other people own) for a decent chance at not dying before they're ready to go--and then still charge a little extra to help someone die when they are ready.
Yeah, I totally agree. It badly needs rectification. The ACA just cemented the issues afaict.
You can make a useful ultrasound machine with just one transducer (e.g., to measure blood flow through the heart using Doppler, or using mechanical scanning).
You don't need a fast CPU for processing: just downconvert to the audio range, and then even an 80286 is fast enough.
Source: wrote embedded software for one of those machines back in the day. 16 kB for everything, from keypad debouncing to GUI.
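The downconversion trick can be sketched in a few lines of Python. This is a toy continuous-wave Doppler model, not any real machine's design; the transmit frequency, flow speed, and filter length are illustrative numbers I picked:

```python
import numpy as np

C = 1540.0      # speed of sound in tissue, m/s
F0 = 2e6        # transmit frequency, Hz (illustrative)
FS = 8e6        # sample rate, Hz

def doppler_shift(v):
    """CW Doppler: a reflector moving at v m/s shifts the echo by 2*v*F0/C."""
    return 2.0 * v * F0 / C

def baseband(echo, t):
    """Mix the echo against the transmit carrier, then low-pass.

    Mixing produces components at f_d and at 2*F0 + f_d; a crude
    moving-average FIR filter kills the high-frequency term, leaving
    an audio-range tone at the Doppler frequency.
    """
    mixed = echo * np.cos(2 * np.pi * F0 * t)
    k = 101
    return np.convolve(mixed, np.ones(k) / k, mode="same")

t = np.arange(0, 0.01, 1 / FS)
fd = doppler_shift(0.5)                      # 0.5 m/s flow -> ~1.3 kHz, audio range
echo = np.cos(2 * np.pi * (F0 + fd) * t)     # idealized echo from moving blood
audio = baseband(echo, t)                    # tone you could literally listen to
```

The point is that after mixing, all the information sits in a kilohertz-range signal, which is why a vintage CPU (or your sound card) can handle the processing.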
Regardless of who is doing the beamforming, the TX and RX analog parts are intrinsically quite expensive, with at least a dozen channels.
A lot of it is regulatory, but not completely. Other issues include:
1) expensive software integrations for EMR and billing reasons
2) the fact that most machines have multiple wands to attach
3) billing codes that tend to support the use of expensive machines rather than cheap ones if a doctor wants to make money
Much of the cost is probably related to billing codes/other non-embedded software.
Proper medical devices have loads of safety features! They have isolated power supplies! They are tested in harsh environments! They fail in a predictable manner! There are regulations that need to be followed! New devices are still expensive because they are better!
Yes, electricity can kill you.
Yes, improper medical advice can kill you.
Yes, malfunctioning diagnostic equipment can lead to an incorrect diagnosis, which, yes, can kill you.
Yes, medical regulations exist and protect us from harm.
While a homebrew machine would not be able to compete with the latest and greatest, I'd hazard that even rudimentary diagnostic equipment could save thousands of lives a year in the developing world. These technologies are not new -- the medical ultrasound has been around for more than 60 years, and the EKG has been around for nearly 100. It seems insane that cost is still such a barrier for the machines used for medical diagnostics, when the price of other technologies has fallen so much in the same period of time.
I bought myself a Rigol DS1054Z, and I realized that I paid $400 for an oscilloscope that would have cost millions of dollars 30 years ago. I thought about the experiments I had done on neurons using the 50 year old oscilloscopes as part of my degree, and I realized that an ECG/EKG can be replicated pretty easily with an oscilloscope. It turns out, building an ECG is pretty trivial. It's not a 12-lead ECG, but it's also something I built out of parts I had on hand.
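To illustrate how simple the signal side can be, here's a toy Python sketch that estimates heart rate from a synthetic trace. The trace generator and threshold are stand-ins I made up; a real ECG needs an instrumentation-amp front end and proper filtering, which is the part the oscilloscope build handles:

```python
import numpy as np

FS = 250  # Hz, a common ECG sample rate

def synthetic_ecg(bpm, seconds):
    """Toy trace: one narrow 'R-wave' spike per beat plus mains-hum-ish noise.

    A stand-in for real electrode data, just to exercise the detector below.
    """
    n = int(FS * seconds)
    sig = 0.05 * np.sin(2 * np.pi * 50 * np.arange(n) / FS)
    period = int(FS * 60 / bpm)
    sig[::period] += 1.0
    return sig

def heart_rate(sig):
    """Estimate BPM from the median spacing of above-threshold local maxima."""
    thresh = 0.5 * sig.max()
    peaks = np.where((sig[1:-1] > thresh)
                     & (sig[1:-1] >= sig[:-2])
                     & (sig[1:-1] >= sig[2:]))[0] + 1
    rr = np.diff(peaks) / FS          # R-R intervals in seconds
    return 60.0 / np.median(rr)

trace = synthetic_ecg(bpm=72, seconds=10)
```

Thresholded peak spacing is essentially what you'd do by eye on an oscilloscope screen; the hard clinical part (morphology, ST segments, arrhythmias) is exactly what this kind of toy detector doesn't touch.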
I don't see why other medical technology should be any different. Yes, unregulated medical technology is dangerous, but the risk doesn't seem to outweigh the potential benefit. If the parts to build these devices are cheaper and more accessible than they have ever been, and the equipment needed to build, test and calibrate the devices is cheaper and more accessible than ever, it seems like the devices themselves should be cheaper and more accessible than ever. I think there is a place for a $20k ultrasound, but when you live hours or days from an ultrasound, a cheaper option could save lives, even if its primary purpose is directing people to get a follow-up with the more capable machine.
There is interpretation that goes into reading an EKG, an xray, lab results. If there's any unreliability whatsoever, that will impact treatment and outcomes.
If there's any unreliability in your EKG readout, that can be the difference between diagnosing a minor heart attack and mere chest pain.
The same goes for diagnosing a cancerous tumor from a benign one. There's already ambiguity and interpretation; if there's any imprecision in the measuring device, that's going to lead to bad outcomes-- both overtreatment and undertreatment.
But given the amount of interpretation and reading (it truly is an art, not a science) that goes into reading an EKG and, more importantly, a tumor x-ray, I think in this situation low res is worse than none, because it's worthless (you can't tell anything from it) and it still costs you something.
If you do try to do a cancer diagnosis on a poor x-ray, you are basing cancer treatment on the flip of a coin.
No cancer? Well, you might fill them up with expensive chemotherapy, which is itself carcinogenic. Actually has cancer? Well, you've just set them loose with an organic time bomb in their system.
I personally think that performing a basic reading of an ultrasound or an ECG/EKG would be fairly simple with some training, but I also have a lot of experience reading raw data. I have a few friends who are Radiologists, and I'm going to have to ask them what they think. I'm really not sure what they'd think about all this, and I'm definitely open to the possibility that I'm massively overestimating my ability to interpret even "simple" diagnostic data.
But I digress...
Sources: asking the two vets I've brought birds to here, own visits to doctors
A cheap device would save on R&D but will be notably inferior, with fewer capabilities than high-end systems, and you can probably also keep costs down if you skip some certifications.
You just don't see the volume that makes economies of scale kick in over your NRE(non-recurring engineering) costs.
Radio engineers are expensive and rare, scopes and even faraday cages are expensive, specialist software is expensive, handsets were a fortune when you had to buy hundreds of them.
We got sold for (I think) about $330 million, then you look at social media startups that go for a couple of billion. Nobody wants to do stuff that changes the world for a reason.
One of the responses to the question is a respectful, reasonable post; the other responds with the typical protectionist FUD that permeates this area. The second asks about the costs of misdiagnosis by people using it inappropriately, but ignores the lost rewards of appropriate uses that are curtailed, and the costs of overpayment due to lack of competition to determine what level of training and payment is actually appropriate.
The original article might be naive about some of the technological challenges associated with an ultrasound machine, but I think that's missing the point.
When the response to "why can't anyone buy an ultrasound machine" is to disingenuously reply "because you have to have the FDA ensure that it's working correctly and people aren't running around killing each other with it," that attitude puts huge constraints on innovation and growth in this area. I can go buy a crowbar and kill people with it, so why can't I buy an ultrasound and use it to study muscle movement, or for education, like the author of the posted article is noting?
Plenty of technically sophisticated open-source efforts exist, and they can't happen if there are arbitrary and unnecessary prohibitions on them. Maybe if the FDA said "hey, go to it" people would realize it's too hard, but maybe they wouldn't. We'll never know as long as there are unnecessary restrictions in the way.
Nothing is stopping you making, buying, operating a toy ultrasound, so long as you make clear that it's a toy and not to be used in human health.
This isn't really how it works. The FDA will expect you to be able to produce evidence that you know it works correctly, but they're not responsible for (or IMO capable of) determining whether devices work correctly. You make claims about what a device does, and you substantiate the claims with evidence. In addition, you create design and production controls that help avoid and mitigate device defects. The FDA can review your controls and defect records, and will focus in particular on defects related to your device's hazards. Interestingly enough, many medical devices' hazards are about the harm you might cause by accidentally dropping the device on, or shocking, a patient or caregiver. While misdiagnosis-due-to-defect is a serious hazard, it can be mitigated to some extent by deferring to the skilled physician/technician operating the device.
IMO the controls are good practices that most businesses would do anyway; they just get special attention because of regulatory control. But it's true: the cost of audits and other items related exclusively to regulation isn't free, and does add some effective cost to the device.
> huge constraints on innovation and growth in this area.
I think fear of the regulation being overly burdensome does limit innovation here. Is it a net win? I'm not sure. IMO the government could mitigate this by offering DARPA-challenge style grant competitions and marketing regarding the scope of their regulatory domain.
This is roughly how surgical navigation systems work. The leaders in this area are companies like BrainLab and Medtronic, usually using optical tracking sensors (e.g. from Northern Digital), or sometimes electromagnetic, with accuracy in the 1-5mm range. For neurosurgery (the area I'm most familiar with), several companies offer tracked ultrasound integration, usually for live overlay and comparison with pre-operative MR or CT images.
> combine it with the positioning sensors of a VR system
For any folks interested in this, there is an active, excellent open source project called PLUS focused on ultrasound, tracking, and sensor data acquisition, as well as volume reconstruction. There's also an associated 3D visualization ecosystem.
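The tracking math underneath these navigation systems is just rigid transforms chained together. Here's a minimal numpy sketch, where the pixel scale and tracker pose are made-up calibration values for illustration, not anything from PLUS or a real tracker:

```python
import numpy as np

def pose(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical probe calibration: image pixels -> probe frame
# (0.2 mm per pixel, image plane at z = 0 in the probe frame).
MM_PER_PX = 0.2
def pixel_to_probe(u, v):
    return np.array([u * MM_PER_PX, v * MM_PER_PX, 0.0, 1.0])

# Hypothetical tracker reading: probe frame -> tracker/world frame.
R = np.array([[0, -1, 0],
              [1,  0, 0],
              [0,  0, 1]], dtype=float)   # 90-degree rotation about z
probe_to_world = pose(R, t=np.array([100.0, 50.0, 20.0]))

# Map an ultrasound pixel into world/patient coordinates (mm).
world_pt = probe_to_world @ pixel_to_probe(128, 64)
```

Registering those world coordinates to a pre-operative MR or CT volume is one more transform of the same shape; the hard engineering is in getting the calibration and the 1-5mm tracking accuracy, not the matrix multiply.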
I don't think vet budgets are very large or favour high tech solutions but maybe you know otherwise.
Varying degrees of pressure are used as part of the diagnostic process. For example, if you are trying to tell a vein from an artery, all things being equal, less force is required to compress a vein than an artery.
Typically ultrasound is used in an interactive way, not to generate static images for interpretation. This applies equally to diagnostics and procedural use.
It works via a different principle though.
See news here: http://www.mobihealthnews.com/content/clarius-mobile-health-...
See company website here: http://www.clarius.me/
That said, the idea that an ultrasound machine is expensive is just laughable from a commercial/industrial cost perspective. OK, a mid-range ultrasound machine costs $50,000.
$50,000!!!! Oh the agony!!! Oh please, a taxi costs $30,000 and I can get a taxi ride downtown for $8.
What's expensive is US medical pricing. Where they charge $700 for an ultrasound. When the machine itself has a capital cost of about $600/mo.
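A back-of-the-envelope check on that $600/mo figure. The 7-year straight-line service life here is my assumption, not a number from the thread:

```python
# Rough straight-line amortization of the machine's capital cost.
machine_cost = 50_000          # mid-range machine, per the figure above
service_years = 7              # assumed service life
monthly_capital = machine_cost / (service_years * 12)   # roughly $595/month

# At $700 per scan, a single scan more than covers a month of capital cost.
scan_price = 700
scans_to_cover_month = scan_price / monthly_capital
```

So the machine's capital cost is a rounding error next to the billed price; the markup is in everything around the machine, not the machine itself.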
See http://sarahbuckley.com/ultrasound-scans-cause-for-concern and references quoted therein for a good overview of the current discussions on side effects of routine ultrasound screening, including tissue damage due to cavitation and hearing loss in fetuses.
The regulatory and evidence burden is hard - but it's not insurmountable and it certainly shouldn't stop change.
Arrays of sensors are still hard: driving them, reading them, and interpreting them. That particular engineering problem hasn't gotten much easier.
Instead the one they sell is locally made and costs around 10K tops. He did claim there is not much tech on these devices and I took him at his word then. It makes sense now.
Beware: just my thoughts. I really don't know much about the market and the money needed. I am probably wrong.
A quick summary - they're not expensive; in fact, for what you get they are remarkably inexpensive. There's a huge amount of work that goes into them, and the author of the original piece simply doesn't know enough about the subject to realise he doesn't know what he's talking about.
In other words, you've provided some suggestions regarding the difficulty of the task, but have not in any way proven or convincingly demonstrated that sub-1k$ ultrasound imaging devices are an impossibility.
From my POV, having also spent many years designing imaging systems and other devices to measure the human body (ultrasound, EEG, EKG, MRI), the biggest challenge to providing cheap consumer-grade tech here is the regulatory burden. If anyone wants to buy a sub-1k$ ultrasound imaging device, there are plenty of unapproved models on Alibaba that work just fine -- provided you're willing to test the device yourself for safety, to ensure it meets your own risk tolerances.
While an ashtray may seem trivial, this example shows that in life-or-death situations, every detail must be considered and doing so is not cheap.
In medical devices this doesn't really work, and actually becomes more of a problem than a benefit, at least currently. You cannot change the constituent parts of your device without a lot of work, so rapidly iterating components can make it hard to get a stable supply of parts for several years. This is (only) one of the reasons that medical-grade monitors are so much more expensive than you might expect.
Or rather - if you sell a fixed number of items and could only reasonably justify a certain margin, then you don't want to make it cheaper.
If barriers to entry are high - then you sit on your cash cow.
This issue is very prominent in healthcare for both services and equipment.
It's a very costly problem.
Insurance companies want your bill to go up, not down, so they don't act as aggressively as they could to cut costs for small items. Hospitals - same.
Because of the vast costs associated with regulation, overhead, marketing and near monopoly on many products, combined with massive 'price inelasticity' on part of the buyer (i.e. you'll pay 'whatever' to get fixed) - you get a problem.
My parents both worked in pharma, it's an industry flush with cash - they spend big on everything, offices, equipment, staff. They have a doctor's sense of entitlement - after all - they are 'saving lives'. And it is serious business, you can't hack your way through most of it.
So as the underlying expenses and regulatory costs go up - so do all the ancillary costs. Add that to the misaligned market incentives and price inelasticity ...
And you get unbelievably expensive healthcare.
I firmly believe you could train a smart person to do an x-ray, reset a bone, and put on a cast, and do it for under $1K. And it would probably cost $10K in a hospital sans insurance.
Now - the first 'problem' in that scenario is that doctors are often paid to be good 'when things go wrong', and to get their yield way up (i.e. can't make mistakes) and both of those things are very expensive: you need to have 10 years of 'extra training' for the 1% of the time something weird happens.
Fair enough - but I still think many of those things can be parameterized.
Costs will not come down until there is an agent forcing it: the government, or preferably, another kind of provider.
Walmart has an approach to business like no other: they force their suppliers to open their books a bit, force their costs down - and then pass all the savings onto the consumer. It's something few understand. Their strategy is volume, and they have an ethos of sucking producer surpluses right out of the value chain.
If Walmart could feasibly get into the healthcare game on the low end, it could send waves right through the industry, which would be good.
The calculations of costs and the determination of the retail price is IMHO a masterpiece.
Imagine that your solution is a fraction of a percentage point worse than the current treatment. Imagine there's a 0.1% increase in harm.
The English NHS sees 1m patients every 36 hours.
In 2012 - 2013 there were 9 million ultrasounds.
0.1% of 9million is 9,000.
I wouldn't want to tell those 9,000 people that their treatment was, even though they got harmed, good enough.
And that 9,000 is just in England.
If you want to save money on ultrasound spending you probably want to reduce the numbers of ultrasounds being taken. Healthy pregnant women with no problems only need one ultrasound, but in some places they're offered very many more.
> In 2014, usage in the U.S. of the most common fetal-ultrasound procedures averaged 5.2 per delivery, up 92% from 2004, according to an analysis of data compiled for The Wall Street Journal by FAIR Health Inc., a nonprofit aggregator of insurance claims. Some women report getting scans at every doctor visit during pregnancy.
It's better to just cut out those needless, and potentially harmful, surplus tests than to add more needless tests with greater risks of harm.
Basically, places where buying $10k of diagnostic tools wouldn't be tenable, either because of the price, or because they would get stolen or damaged before they could "pay off".
Places where I think this might be useful are places like Nepal, Sudan, Pakistan, India, Niger, Mongolia, etc. Places that have low development and population densities.
I think a DIY instrument would be especially useful in India, given the fact that it has low development levels but a lot of highly educated individuals and a strong central government.
That being said, this is all speculation. I don't know enough about ANY of those areas to say whether people there would actually find tools like this useful. I'm definitely not suggesting we start filling shipping containers with cheap instruments and shipping them abroad.
There are also wireless ultrasound probes for around $800 USD - these could potentially come down in price too.
The reason commercial ultrasound machines cost $50-500k is that hospitals are paying for:
- a brand name with reliability behind it
- the ultrasound rep to come and demo the machine a few times before and after it is purchased, as well as bring in some platters of food. The reps also bring additional machines along for courses in ultrasound use, intern teaching, etc.
- a support contract
The cheap, ubiquitous ultrasound machine seems like a great idea, and is useful in some settings (like ER especially), however there are some significant issues - probably best understood with the example of echocardiography.
Performing an echocardiogram (ultrasound of the heart) is a highly specialised field with neverending levels of complexity. Firstly, you are often dealing with inadequate images due to the patient's obesity or other anatomical factors, so less experienced operators get worse images, which can make interpretation impossible. Secondly, even when you do get good pictures, it is a very subjective area, and two operators will commonly have divergent results for the same scan. Thirdly, there are dozens and dozens of parameters which can be measured or calculated as surrogates for functional measurement of the heart - and these are being proven or disproven, becoming fashionable or falling out of fashion, over time.
Practitioners need to perform a certain number of echos per year to maintain base competency, and those with low numbers generally perform much worse than those who do echos every day.
A quick, goal-directed, focused echo can yield useful results, and does work a lot better than a stethoscope, but then many would argue that allowing all comers (physicians/ED docs/anaesthesiologists/ICU docs) to perform poor-quality scans is a step back from having more specialised doctors performing fewer, high-quality scans.
So overall, the issue is probably not that the machines are too expensive, it is that we have not worked out exactly who should be doing these scans. The truth lies somewhere between a very few people (ultrasound trained cardiologists/radiologists) and everyone, but we are not sure exactly where.
Overall I think that the technical advances here will not come from building cheap open source ultrasound machines (although it does sound fun!), but from improving ultrasound machines' ability to acquire and interpret pictures themselves. This may be by having a remote telesonographer who guides and interprets a scan performed by a layman (e.g. a nurse), which would allow rapid, remote results without the telesonographer present (and could also allow utilisation of excess sonographers in some locations for places where they are scarce, or allow daytime sonographers on one side of the world to help scan patients at 3am on the other side, or allow utilisation of outsourced remote Indian/Filipino sonographers for cost savings).
Alternatively, new tech would perform scans automatically (eg. robotic arms, or human operator guided with instructions or haptic feedback) and then do tech guided interpretation - eg. generate all the important data from the information given and present it at a level appropriate to the person requesting the scan.