> You don't have a phd, you can't be first author on this paper
I'm going to go out on a limb and say that either you're in a very archaic field or institution, or that your PI is not a nice person. In the US at least, even undergraduates are routinely first authors on papers where they did the majority of work.
This was in tandem with a Canadian researcher, so maybe.
I suspect that part of it is that I'm not in any sort of academic pipeline (I got out of school with a BSc, essentially apprenticed myself under a nautical engineer for a few years, then opened a workshop).
Prescription medication can be extremely expensive without good (expensive) health insurance, especially if there is no "generic brand" version of the drug available. Think hundreds of dollars per refill. I think that's the pain point they're trying to solve.
Some patients have reported costs in the $1-2K range per month for prescriptions. That's definitely on the higher end, but even for someone with multiple chronic diseases (diabetes, high blood pressure), co-pays of $10 or $20 per medicine add up quickly.
This sounds like an excellent use case for blood-level monitoring for personalized medicine (or in this case, vitamins). I'm not sure how viable the hardware aspect is, but there are sure a lot of things you could do in software with a real-time stream of that data.
Not sure how, but this happened to me. Specifically, my bookmarks accumulated over years seem to have been wiped out when I did decide to sign in. Turns out I had already synced a new computer, so Chrome decided to helpfully replace my un-signed-in bookmarks with the blank set from the new computer.
So you're trying to build a real-time MapReduce? The other part, about the diff-vs-batch tradeoff, really depends on what the performance penalty of moving to a stream of diffs is going to be compared to a batch. If it's uniformly better than batch processing, then you've also just invented a better batch-processing transport.
MapReduce is actually an incredibly flexible paradigm. If you have a clean implementation of MapReduce with well-defined interfaces, you can make it behave in a number of different ways by putting it on top of different storage engines. We've built a MapReduce engine as well as the commit-based storage that lets us slide gracefully from streaming to batched.
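To make the "same engine, different storage" idea concrete, here's a minimal sketch in Python. All the names (`Storage`, `BatchStorage`, `StreamStorage`, `map_reduce`) are hypothetical, not from the commenters' actual system; the point is just that the engine only depends on a records interface, so a batch file and a stream of incremental records are interchangeable underneath it.

```python
# Minimal MapReduce sketch with a pluggable storage interface.
# All names here are illustrative, not from any real system.
from collections import defaultdict
from typing import Callable, Iterable, Protocol


class Storage(Protocol):
    """Anything that can yield input records."""
    def records(self) -> Iterable[str]: ...


class BatchStorage:
    """One big pre-materialized batch of rows."""
    def __init__(self, rows: list[str]):
        self.rows = rows

    def records(self) -> Iterable[str]:
        return iter(self.rows)


class StreamStorage:
    """Wraps an incremental source, e.g. records replayed from a commit log."""
    def __init__(self, source: Iterable[str]):
        self.source = source

    def records(self) -> Iterable[str]:
        yield from self.source


def map_reduce(storage: Storage,
               mapper: Callable[[str], Iterable[tuple[str, int]]],
               reducer: Callable[[str, list[int]], int]) -> dict[str, int]:
    groups: dict[str, list[int]] = defaultdict(list)
    for record in storage.records():          # map phase
        for key, value in mapper(record):
            groups[key].append(value)         # shuffle (group by key)
    return {k: reducer(k, vs) for k, vs in groups.items()}  # reduce phase


# Word count runs unchanged over either storage engine:
wc = lambda line: [(w, 1) for w in line.split()]
total = lambda _key, values: sum(values)

batch = map_reduce(BatchStorage(["a b a", "b c"]), wc, total)
stream = map_reduce(StreamStorage(iter(["a b a", "b c"])), wc, total)
assert batch == stream == {"a": 2, "b": 2, "c": 1}
```

The job logic never changes; only the storage engine handed to `map_reduce` does, which is the property that lets an implementation slide between streaming and batched execution.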
Our storage layer is built on top of btrfs. We haven't put together comprehensive benchmarks yet, but our experience with btrfs is that there isn't a meaningful penalty to reading a batch of data from a commit-based system compared to more traditional storage layers. I really wish I had concrete performance numbers to give you, but we haven't had the time to put them together. I will say that our observations match btrfs's reported performance numbers and are what I'd expect given the algorithmic complexity of their system.