Imagine that you have an AI solution that does some calculation (e.g., detecting a stroke or bleeding in the brain). These calculations are often very heavy, and if a lot of data is coming in, you need to spread the load.
In DICOM, a brain scan almost always spans many, many DICOM files. So when you push one series, it can contain N images, which means N transactions to the server.
You cannot simply round-robin the load, because all DICOM files that belong to the same series (study) must end up on the same worker.
So the goal of WolfPACS [1] is to inspect the DICOM headers as the files come in and route each file based on its StudyUID and Calling AE.
This makes it possible to associate certain workers with certain clients, so how a worker is picked is very flexible. Maybe you want to do some A/B testing, or take a worker offline "softly".
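The routing idea above can be sketched in a few lines. This is not WolfPACS's actual implementation (which is written in Erlang); it is a minimal illustration of sticky routing, where the pool names, AE titles, and the choice of hashing are all hypothetical:

```python
import hashlib

# Hypothetical worker pools keyed by Calling AE title. Associating
# specific workers with specific clients (and a default pool for
# everyone else) is the flexibility described above.
POOLS = {
    "CT_SCANNER_1": ["worker-a", "worker-b"],
    "MR_SCANNER_2": ["worker-c"],
}
DEFAULT_POOL = ["worker-a", "worker-b", "worker-c"]

def pick_worker(study_uid: str, calling_ae: str) -> str:
    """Deterministically map (StudyUID, Calling AE) to one worker.

    Hashing the StudyUID guarantees that every file belonging to the
    same study lands on the same worker, no matter how many files the
    series contains or in what order they arrive.
    """
    pool = POOLS.get(calling_ae, DEFAULT_POOL)
    digest = hashlib.sha256(study_uid.encode("ascii")).hexdigest()
    return pool[int(digest, 16) % len(pool)]
```

Taking a worker offline "softly" then amounts to removing it from the pools so no *new* studies hash to it, while in-flight studies keep their existing worker assignment (a real router would pin open studies in a table rather than rely on the hash alone).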
[1] https://github.com/wolfpacs/wolfpacs