How Open-Source Software is
Shaping the Future of Healthcare?
Miguel Xochicale, sr-rse@arc-ucl
mxochicale/open-healthcare-slides
7-March-2025; Guest lecture at University of Bristol (grid-worms-animation 2023 by saforem2)
Figure 1: Medical AI translational challenges between system development and routine clinical application
Li, Zhongwen, Lei Wang, Xuefang Wu, Jiewei Jiang, Wei Qiang, He Xie, Hongjian Zhou, Shanjun Wu, Yi Shao, and Wei Chen. “Artificial intelligence in ophthalmology: The path to the real-world clinic.” Cell Reports Medicine 4, no. 7 (2023).
https://www.iso.org/standard/38421.html
US-FDA-Artificial-Intelligence-and-Machine-Learning-Discussion-Paper: https://www.fda.gov/files/medical%20devices/published/US-FDA-Artificial-Intelligence-and-Machine-Learning-Discussion-Paper.pdf
Figure 2: AI standardisation landscape
Janaćković, G., Vasović, D. and Vasović, B., 2024. Artificial intelligence standardisation efforts. Engineering Management and Competitiveness (EMC 2024), p. 250.
Oviedo, Jesús, Moisés Rodriguez, Andrea Trenta, Dino Cannas, Domenico Natale, and Mario Piattini. “ISO/IEC quality standards for AI engineering.” Computer Science Review 54 (2024): 100681.
Fetal Ultrasound Image Synthesis
Wright-Gilbertson, M. (2014), PhD thesis; https://en.wikipedia.org/wiki/Gestational_age; National Health Service 2021. Screening for Down's syndrome, Edwards' syndrome and Patau's syndrome. https://www.nhs.uk/pregnancy/your-pregnancy-care
Sciortino et al. in Computers in Biology and Medicine 2017 https://doi.org/10.1016/j.compbiomed.2017.01.008; He et al. in Front. Med. 2021 https://doi.org/10.3389/fmed.2021.729978
Fetal Brain Ultrasound Image Dataset
Burgos-Artizzu, X et al. (2020). FETAL PLANES DB: Common maternal-fetal ultrasound images [Data set]. In Nature Scientific Reports (1.0, Vol. 10, p. 10200). Zenodo. https://doi.org/10.5281/zenodo.3904280
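To make the dataset concrete, the sketch below loads its metadata with pandas. This is a minimal example assuming the Zenodo archive has been downloaded and unpacked locally; the file name, separator, and column names shown are assumptions about the unpacked archive, not guaranteed by the record.

# Minimal sketch: load the FETAL PLANES DB metadata
# (file and column names are assumptions about the unpacked Zenodo archive)
import pandas as pd

df = pd.read_csv("FETAL_PLANES_DB/FETAL_PLANES_DB_data.csv", sep=";")
brain = df[df["Plane"] == "Fetal brain"]  # select fetal-brain standard planes
print(f"{len(brain)} fetal-brain images out of {len(df)} total")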
GAN-based fetal imaging
Diffusion-Super-Resolution-GAN (DSR-GAN) and Transformer-based-GAN
M. Iskandar et al. “Towards Realistic Ultrasound Fetal Brain Imaging Synthesis” in MIDL2023. https://github.com/xfetus/midl2023
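As a rough illustration of how GAN-based synthesis is trained, the sketch below shows one adversarial update step in PyTorch with toy fully-connected networks. It is not the DSR-GAN or transformer-based architecture from the paper, whose models and losses are more elaborate; only the alternating discriminator/generator updates are the point here.

# Minimal sketch of one GAN training step (toy networks, not the paper's models)
import torch
import torch.nn as nn

latent_dim, image_dim = 100, 64 * 64
G = nn.Sequential(nn.Linear(latent_dim, image_dim), nn.Tanh())  # generator
D = nn.Sequential(nn.Linear(image_dim, 1))                      # discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(16, image_dim)  # stand-in for a batch of real ultrasound images

# Discriminator step: real images labelled 1, synthesised images labelled 0
z = torch.randn(16, latent_dim)
fake = G(z).detach()
loss_d = bce(D(real), torch.ones(16, 1)) + bce(D(fake), torch.zeros(16, 1))
opt_d.zero_grad()
loss_d.backward()
opt_d.step()

# Generator step: try to make the discriminator label synthesised images as 1
z = torch.randn(16, latent_dim)
loss_g = bce(D(G(z)), torch.ones(16, 1))
opt_g.zero_grad()
loss_g.backward()
opt_g.step()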
Quality of synthesised images is evaluated with the Fréchet inception distance (FID), which measures the distance between the feature distributions of synthesised and original images (Heusel et al., 2017).
The lower the FID, the more similar the synthesised images are to the original ones. The FID metric has been shown to work well for fetal head ultrasound compared to other metrics (Bautista et al., 2012).
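For reference, FID reduces to a closed-form distance between the means and covariances of feature embeddings; the sketch below assumes the features have already been extracted with an Inception-style network, as in Heusel et al. (2017).

# Minimal sketch: FID from pre-extracted feature statistics
import numpy as np
from scipy import linalg

def frechet_inception_distance(feat_real, feat_fake):
    """FID between two feature arrays of shape (n_samples, n_features)."""
    mu1, mu2 = feat_real.mean(axis=0), feat_fake.mean(axis=0)
    sigma1 = np.cov(feat_real, rowvar=False)
    sigma2 = np.cov(feat_fake, rowvar=False)
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):  # numerical noise can add tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu1 - mu2) ** 2) + np.trace(sigma1 + sigma2 - 2.0 * covmean))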
A Python-based library for synthesising ultrasound images of fetal development
Future work
🔧 Developing real-time AI applications for diagnosis using open-source software
Figure 3: End-to-end pipeline of development and deployment of real-time AI apps for surgery
Figure 4: Operator: An operator is the most basic unit of work in this framework.
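As a minimal sketch of that idea, a custom Holoscan operator declares its ports in setup() and does its work in compute(). The pass-through operator below is illustrative only and is not part of the example application that follows.

from holoscan.core import Operator, OperatorSpec

class PassthroughOp(Operator):
    """Illustrative operator that forwards its input unchanged."""

    def setup(self, spec: OperatorSpec):
        spec.input("in")    # declare one input port
        spec.output("out")  # declare one output port

    def compute(self, op_input, op_output, context):
        message = op_input.receive("in")  # receive from the upstream operator
        op_output.emit(message, "out")    # pass it downstream unchanged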
import os
from argparse import ArgumentParser

from holoscan.core import Application
from holoscan.operators import (
    FormatConverterOp,
    HolovizOp,
    InferenceOp,
    SegmentationPostprocessorOp,
    VideoStreamReplayerOp,
)
from holoscan.resources import UnboundedAllocator


class BYOMApp(Application):
    def __init__(self, data):
        """Initialize the application

        Parameters
        ----------
        data : str
            Location of the data
        """
        super().__init__()

        # set name
        self.name = "BYOM App"

        if data == "none":
            data = os.environ.get("HOLOSCAN_INPUT_PATH", "../data")

        self.sample_data_path = data

        self.model_path = os.path.join(os.path.dirname(__file__), "../model")
        self.model_path_map = {
            "byom_model": os.path.join(self.model_path, "identity_model.onnx"),
        }

        self.video_dir = os.path.join(self.sample_data_path, "racerx")
        if not os.path.exists(self.video_dir):
            raise ValueError(f"Could not find video data: {self.video_dir=}")

    def compose(self):
        # Create the operators; each one reads its parameters from the
        # matching key in the YAML config file via self.kwargs(...)
        host_allocator = UnboundedAllocator(self, name="host_allocator")

        source = VideoStreamReplayerOp(
            self, name="replayer", directory=self.video_dir, **self.kwargs("replayer")
        )
        preprocessor = FormatConverterOp(
            self, name="preprocessor", pool=host_allocator, **self.kwargs("preprocessor")
        )
        inference = InferenceOp(
            self,
            name="inference",
            allocator=host_allocator,
            model_path_map=self.model_path_map,
            **self.kwargs("inference"),
        )
        postprocessor = SegmentationPostprocessorOp(
            self, name="postprocessor", allocator=host_allocator, **self.kwargs("postprocessor")
        )
        viz = HolovizOp(self, name="viz", **self.kwargs("viz"))

        # Define the workflow
        self.add_flow(source, viz, {("output", "receivers")})
        self.add_flow(source, preprocessor, {("output", "source_video")})
        self.add_flow(preprocessor, inference, {("tensor", "receivers")})
        self.add_flow(inference, postprocessor, {("transmitter", "in_tensor")})
        self.add_flow(postprocessor, viz, {("out_tensor", "receivers")})


def main(config_file, data):
    app = BYOMApp(data=data)
    # if the --config command line argument was provided, it will override this config_file
    app.config(config_file)
    app.run()


if __name__ == "__main__":
    # Parse args
    parser = ArgumentParser(description="BYOM demo application.")
    parser.add_argument(
        "-d",
        "--data",
        default="none",
        help="Set the data path",
    )
    args = parser.parse_args()

    config_file = os.path.join(os.path.dirname(__file__), "byom.yaml")
    main(config_file=config_file, data=args.data)
%YAML 1.2
---
replayer:  # VideoStreamReplayer
  basename: "racerx"
  frame_rate: 0  # as specified in timestamps
  repeat: true  # default: false
  realtime: true  # default: true
  count: 0  # default: 0 (no frame count restriction)

preprocessor:  # FormatConverter
  out_tensor_name: source_video
  out_dtype: "float32"
  resize_width: 512
  resize_height: 512

inference:  # Inference
  backend: "trt"
  pre_processor_map:
    "byom_model": ["source_video"]
  inference_map:
    "byom_model": ["output"]

postprocessor:  # SegmentationPostprocessor
  in_tensor_name: output
  # network_output_type: None
  data_format: nchw

viz:  # Holoviz
  width: 854
  height: 480
  color_lut: [
    [0.65, 0.81, 0.89, 0.1],
  ]
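Note how each top-level key in byom.yaml (replayer, preprocessor, inference, postprocessor, viz) matches the name passed to self.kwargs(...) when the corresponding operator is built in compose(). The pipeline can therefore be retuned, for example the resize resolution, inference backend, or colour lookup table, by editing the YAML file without touching the Python code.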
Real-time AI diagnosis for endoscopic pituitary surgery
real-time-ai-for-surgery
Figure 6: Getting-started documentation provides a range of links to set up, use, run, and debug the application, including GitHub workflows.
multi-ai.py
...
        # Define the workflow
        if is_v4l2:
            self.add_flow(source, viz, {("signal", "receivers")})
            self.add_flow(source, preprocessor_v4l2, {("signal", "source_video")})
            self.add_flow(source, preprocessor_phasenet_v4l2, {("signal", "source_video")})
            for op in [preprocessor_v4l2, preprocessor_phasenet_v4l2]:
                self.add_flow(op, multi_ai_inference_v4l2, {("", "receivers")})

            ### connect inferenceOp to postprocessors
            self.add_flow(
                multi_ai_inference_v4l2, multiheadOp, {("transmitter", "in_tensor_postproOp")}
            )
            self.add_flow(multi_ai_inference_v4l2, segpostprocessor, {("transmitter", "")})
            self.add_flow(multi_ai_inference_v4l2, phasenetOp, {("", "in")})
        else:
            self.add_flow(source, viz, {("", "receivers")})
            self.add_flow(source, preprocessor_replayer, {("output", "source_video")})
            self.add_flow(source, preprocessor_phasenet_replayer, {("output", "source_video")})
            for op in [preprocessor_replayer, preprocessor_phasenet_replayer]:
                self.add_flow(op, multi_ai_inference_replayer, {("", "receivers")})

            ### connect inferenceOp to postprocessors
            self.add_flow(
                multi_ai_inference_replayer, multiheadOp, {("transmitter", "in_tensor_postproOp")}
            )
            self.add_flow(multi_ai_inference_replayer, segpostprocessor, {("transmitter", "")})
            self.add_flow(multi_ai_inference_replayer, phasenetOp, {("", "in")})

        ## connect postprocessors outputs for visualisation with holoviz
        self.add_flow(multiheadOp, viz, {("out_tensor_postproOp", "receivers")})
        self.add_flow(segpostprocessor, viz, {("", "receivers")})
        self.add_flow(phasenetOp, viz, {("out", "receivers")})
        self.add_flow(phasenetOp, viz, {("output_specs", "input_specs")})
...
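The two branches wire the same graph from different sources: the v4l2 branch takes live frames from a capture device, while the replayer branch feeds recorded video. Both converge on the same postprocessors and a single Holoviz sink, so the surgical models behave identically offline and in the operating room.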
multi-ai.yaml
...
multi_ai_inference_v4l2:  # Multi-AI Inference Operator InferenceOp()
  backend: "trt"
  pre_processor_map:
    "pit_surg_model": ["prepro_v4l2"]
    "phasenet_model": ["prepro_PNv4l2"]
  inference_map:
    "pit_surg_model": ["segmentation_masks", "landmarks"]
    "phasenet_model": ["out"]
  enable_fp16: false
  parallel_inference: true  # optional param, default to true
  infer_on_cpu: false  # optional param, default to false
  input_on_cuda: true  # optional param, default to true
  output_on_cuda: true  # optional param, default to true
  transmit_on_cuda: true  # optional param, default to true
  is_engine_path: false  # optional param, default to false

multi_ai_inference_replayer:  # Multi-AI Inference Operator InferenceOp()
  backend: "trt"
  pre_processor_map:
    "pit_surg_model": ["prepro_replayer"]
    "phasenet_model": ["prepro_PNreplayer"]
  inference_map:
    "pit_surg_model": ["segmentation_masks", "landmarks"]
    "phasenet_model": ["out"]
  enable_fp16: false
  parallel_inference: true  # optional param, default to true
  infer_on_cpu: false  # optional param, default to false
  input_on_cuda: true  # optional param, default to true
  output_on_cuda: true  # optional param, default to true
  transmit_on_cuda: true  # optional param, default to true
  is_engine_path: false  # optional param, default to false
...
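In both inference blocks, the keys of pre_processor_map and inference_map are the model names (pit_surg_model, phasenet_model) and the values name the tensors each model consumes and produces. With parallel_inference set to true, the segmentation/landmark model and the phase-recognition model run concurrently within a single InferenceOp.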
Figure 7: real-time-ai-for-surgery follows the Contributor Covenant Code of Conduct. Contributions, issues and feature requests are welcome.
oocular: Open-source Care Using SOTA AI for Real-time monitoring and diagnosis
Figure 8: Practical application of AI in all common ophthalmic imaging modalities
Li, Zhongwen, Lei Wang, Xuefang Wu, Jiewei Jiang, Wei Qiang, He Xie, Hongjian Zhou, Shanjun Wu, Yi Shao, and Wei Chen. “Artificial intelligence in ophthalmology: The path to the real-world clinic.” Cell Reports Medicine 4, no. 7 (2023).
[1] Sarvananthan, Nagini, Mylvaganam Surendran, Eryl O. Roberts, Sunila Jain, Shery Thomas, Nitant Shah, Frank A. Proudlock et al. “The prevalence of nystagmus: the Leicestershire nystagmus survey.” Investigative ophthalmology & visual science 50, no. 11 (2009): 5201-5206.
Figure 9: End-to-end workflow
A Python-based library for REal-time Ai Diagnosis for nYstagmus
Future work
Yao, Chang, Menghan Hu, Qingli Li, Guangtao Zhai, and Xiao-Ping Zhang. “Transclaw u-net: claw u-net with transformers for medical image segmentation.” In 2022 5th International Conference on Information Communication and Signal Processing (ICICSP), pp. 280-284. IEEE, 2022.
Liu, L., Wu, X., Lin, D., Zhao, L., Li, M., Yun, D., Lin, Z., Pang, J., Li, L., Wu, Y. and Lai, W., 2023. DeepFundus: a flow-cytometry-like image quality classifier for boosting the whole life cycle of medical artificial intelligence. Cell Reports Medicine, 4(2).
Open-Source Software in Surgical, Biomedical and AI Technologies
A full-day workshop featuring 12 speakers and 6 panelists from industry, academia, and clinical practice, along with poster and abstract submissions.
Key dates
Connect with us
How to Shape the Future of Healthcare Using Open-Source Software?
Braune, Katarina, Sufyan Hussain, and Rayhan Lal. “The first regulatory clearance of an open-source automated insulin delivery algorithm.” Journal of Diabetes Science and Technology 17, no. 5 (2023): 1139-1141.
Benjamens, S., Dhunnoo, P. and Meskó, B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. npj Digit. Med. 3, 118 (2020).