Can Open-Source Software
Revolutionise Healthcare?


 

Miguel Xochicale, PhD
mxochicale/open-healthcare-slides

Overview

My trajectory

🔧 🏥 From bench to bedside

Regulation vs innovation in MedTech

IEC 62304 standard for medical device software life cycle processes

https://www.iso.org/standard/38421.html

Good ML practices by FDA

US FDA: Artificial Intelligence and Machine Learning Discussion Paper

FDA-approved AI-based Medical Devices

Benjamens, S., Dhunnoo, P. and Meskó, B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. npj Digit. Med. 3, 118 (2020).

🏥 Challenges in AI clinical translation

Figure 1: Medical AI translational challenges between system development and routine clinical application

Li, Zhongwen, Lei Wang, Xuefang Wu, Jiewei Jiang, Wei Qiang, He Xie, Hongjian Zhou, Shanjun Wu, Yi Shao, and Wei Chen. “Artificial intelligence in ophthalmology: The path to the real-world clinic.” Cell Reports Medicine 4, no. 7 (2023).

Use cases: Fetal Ultrasound Image Synthesis

Dating Ultrasound Scan (12 week scan)

Wright-Gilbertson M. 2014, PhD thesis; https://en.wikipedia.org/wiki/Gestational_age; National Health Service 2021. Screening for Down's syndrome, Edwards' syndrome and Patau's syndrome. https://www.nhs.uk/pregnancy/your-pregnancy-care

Challenges of US biometric measurements

  • Operator dependent,
  • Clinical system dependent,
  • Fetal position,
  • Similar morphological and echogenic characteristics in the US, and
  • Few public datasets are available (we have only found two).

Sciortino et al. in Computers in Biology and Medicine 2017 https://doi.org/10.1016/j.compbiomed.2017.01.008; He et al. in Front. Med. 2021 https://doi.org/10.3389/fmed.2021.729978

TransThalamic Plane

Fetal Brain Ultrasound Image Dataset

Burgos-Artizzu, X et al. (2020). FETAL PLANES DB: Common maternal-fetal ultrasound images [Data set]. In Nature Scientific Reports (1.0, Vol. 10, p. 10200). Zenodo. https://doi.org/10.5281/zenodo.3904280

TransCerebellar Plane


TransVentricular Plane


Research Questions

  • Research and implement deep learning methods for generating synthetic fetal ultrasound images for both normal and abnormal cases,
  • Propose and apply quantitative and qualitative methods to evaluate synthesised fetal US images.

GAN-based fetal imaging

  1. Bautista et al. 2022, "Empirical Study of Quality Image Assessment for Synthesis of Fetal Head Ultrasound Imaging with DCGANs", MIUA. https://github.com/budai4medtech/miua2022
  2. Liu et al. 2021, "Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis". https://arxiv.org/abs/2101.04775

AI/ML pipeline

Image Quality Assessment

The quality of synthesised images is evaluated with the Fréchet inception distance (FID), which measures the distance between the feature distributions of synthesised and original images (Heusel et al., 2017).

The lower the FID, the more similar the synthesised images are to the original ones. The FID metric was shown to work well with fetal head US compared to other metrics (Bautista et al., 2022).
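FID is the Fréchet distance between two Gaussians fitted to image features (in practice, InceptionV3 activations). A minimal NumPy/SciPy sketch, assuming the per-image feature vectors have already been extracted:

```python
import numpy as np
from scipy import linalg


def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between Gaussians N(mu1, sigma1) and N(mu2, sigma2)."""
    diff = mu1 - mu2
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)  # matrix square root
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from numerics
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))


def fid_from_features(real_feats, fake_feats):
    """FID from feature matrices with one row per image."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    sigma_r = np.cov(real_feats, rowvar=False)
    sigma_f = np.cov(fake_feats, rowvar=False)
    return frechet_distance(mu_r, sigma_r, mu_f, sigma_f)
```

Identical feature sets give an FID near zero; shifting the synthetic distribution away from the real one drives it up.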

M. Iskandar et al. “Towards Realistic Ultrasound Fetal Brain Imaging Synthesis” in MIDL2023. https://github.com/budai4medtech/midl2023

Methods

  • Diffusion-Super-Resolution-GAN (DSR-GAN)
  • Transformer-based GAN


Experiments: Design and results


Experiments: Design and results


github.com/budai4medtech/midl2023


Fetal US imaging with Diffusion models

  1. Ho et al. 2020 ”Denoising Diffusion Probabilistic Models” https://arxiv.org/abs/2006.11239
  2. Fiorentino et al. 2022 ”A Review on Deep Learning Algorithms for Fetal Ultrasound-Image Analysis” https://arxiv.org/abs/2201.12260
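The forward (noising) process in Ho et al. 2020 has the closed form q(x_t | x_0) = N(√ᾱ_t x_0, (1 − ᾱ_t)I), which is all a training loop needs to corrupt an image to an arbitrary timestep. A NumPy sketch with an illustrative linear schedule (image sizes and batch are placeholders):

```python
import numpy as np


def linear_beta_schedule(T, beta_1=1e-4, beta_T=2e-2):
    """Linear variance schedule (endpoint values as in Ho et al. 2020)."""
    return np.linspace(beta_1, beta_T, T)


def q_sample(x0, t, alpha_bar, rng):
    """Draw x_t ~ q(x_t | x_0) in closed form; also return the noise used."""
    eps = rng.normal(size=x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps


T = 1000
alpha_bar = np.cumprod(1.0 - linear_beta_schedule(T))  # cumulative ᾱ_t

rng = np.random.default_rng(0)
x0 = rng.normal(size=(4, 64, 64))  # stand-in for a batch of 64x64 US images
xt, eps = q_sample(x0, T - 1, alpha_bar, rng)  # near-pure noise at t = T-1
```

A denoising network is then trained to predict eps from (xt, t); running the learned reverse process from noise generates synthetic images.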

xfetus 👶 🧠 🤖

A library for ultrasound fetal imaging synthesis using:

  • GANs,
  • transformers, and
  • diffusion models.

https://github.com/budai4medtech/xfetus

Developing real-time AI applications for diagnosis

Real-time AI Applications for Surgery

Figure 2: Development and deployment pipeline for real-time AI apps for surgery

NVIDIA Holoscan platform

Holoscan Core Concepts

Figure 3: Operator: An operator is the most basic unit of work in this framework.
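The operator idea can be illustrated with a toy dataflow in plain Python. This is a conceptual sketch, not the Holoscan API: real operators have typed input/output ports and are scheduled by the runtime, and the class and method names below are made up for illustration.

```python
from typing import Any, Callable


class Operator:
    """Toy stand-in for an operator: a named unit of work."""

    def __init__(self, name: str, fn: Callable[[Any], Any]):
        self.name, self.fn = name, fn

    def compute(self, value: Any) -> Any:
        return self.fn(value)


class Pipeline:
    """Chains operators in order, loosely mimicking add_flow()."""

    def __init__(self):
        self.ops: list[Operator] = []

    def add_flow(self, op: Operator) -> None:
        self.ops.append(op)

    def run(self, value: Any) -> Any:
        for op in self.ops:  # each operator's output feeds the next
            value = op.compute(value)
        return value


pipe = Pipeline()
pipe.add_flow(Operator("preprocess", lambda x: x / 255.0))  # normalise pixel
pipe.add_flow(Operator("inference", lambda x: x * 2.0))     # dummy model
result = pipe.run(255.0)
```

In Holoscan the same wiring is expressed with `add_flow(upstream, downstream, {(out_port, in_port)})`, as in the BYOM example below.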

Bring Your Own Model (BYOM)

Figure 4: Connecting Operators
import os
from argparse import ArgumentParser

from holoscan.core import Application

from holoscan.operators import (
    FormatConverterOp,
    HolovizOp,
    InferenceOp,
    SegmentationPostprocessorOp,
    VideoStreamReplayerOp,
)
from holoscan.resources import UnboundedAllocator


class BYOMApp(Application):
    def __init__(self, data):
        """Initialize the application.

        Parameters
        ----------
        data : str
            Location of the data.
        """

        super().__init__()

        # set name
        self.name = "BYOM App"

        if data == "none":
            data = os.environ.get("HOLOSCAN_INPUT_PATH", "../data")

        self.sample_data_path = data

        self.model_path = os.path.join(os.path.dirname(__file__), "../model")
        self.model_path_map = {
            "byom_model": os.path.join(self.model_path, "identity_model.onnx"),
        }

        self.video_dir = os.path.join(self.sample_data_path, "racerx")
        if not os.path.exists(self.video_dir):
            raise ValueError(f"Could not find video data:{self.video_dir=}")

    def compose(self):
        # Operator definitions (source, preprocessor, inference,
        # postprocessor, viz) are omitted on this slide for brevity.

        # Define the workflow
        self.add_flow(source, viz, {("output", "receivers")})
        self.add_flow(source, preprocessor, {("output", "source_video")})
        self.add_flow(preprocessor, inference, {("tensor", "receivers")})
        self.add_flow(inference, postprocessor, {("transmitter", "in_tensor")})
        self.add_flow(postprocessor, viz, {("out_tensor", "receivers")})


def main(config_file, data):
    app = BYOMApp(data=data)
    # if the --config command line argument was provided, it will override this config_file
    app.config(config_file)
    app.run()


if __name__ == "__main__":
    # Parse args
    parser = ArgumentParser(description="BYOM demo application.")
    parser.add_argument(
        "-d",
        "--data",
        default="none",
        help=("Set the data path"),
    )

    args = parser.parse_args()
    config_file = os.path.join(os.path.dirname(__file__), "byom.yaml")
    main(config_file=config_file, data=args.data)
%YAML 1.2
---
replayer:  # VideoStreamReplayer
  basename: "racerx"
  frame_rate: 0 # as specified in timestamps
  repeat: true # default: false
  realtime: true # default: true
  count: 0 # default: 0 (no frame count restriction)

preprocessor:  # FormatConverter
  out_tensor_name: source_video
  out_dtype: "float32"
  resize_width: 512
  resize_height: 512

inference:  # Inference
  backend: "trt"
  pre_processor_map:
    "byom_model": ["source_video"]
  inference_map:
    "byom_model": ["output"]

postprocessor:  # SegmentationPostprocessor
  in_tensor_name: output
  # network_output_type: None
  data_format: nchw

viz:  # Holoviz
  width: 854
  height: 480
  color_lut: [
    [0.65, 0.81, 0.89, 0.1],
    ]

Use cases: Real-time AI diagnosis for endoscopic pituitary surgery

⚕️ Endoscopic Pituitary Surgery

real-time-ai-for-surgery

Getting started docs

Figure 5: The getting-started documentation provides links to set up, use, run and debug the application, including the GitHub workflow.

real-time-ai-for-surgery

🏥 Endoscopic pituitary surgery


multi-ai.py

...

        # Define the workflow
        if is_v4l2:
            self.add_flow(source, viz, {("signal", "receivers")})
            self.add_flow(source, preprocessor_v4l2, {("signal", "source_video")})
            self.add_flow(source, preprocessor_phasenet_v4l2, {("signal", "source_video")})
            for op in [preprocessor_v4l2, preprocessor_phasenet_v4l2]:
                self.add_flow(op, multi_ai_inference_v4l2, {("", "receivers")})
            ### connect inferenceOp to postprocessors
            self.add_flow(
                multi_ai_inference_v4l2, multiheadOp, {("transmitter", "in_tensor_postproOp")}
            )
            self.add_flow(multi_ai_inference_v4l2, segpostprocessor, {("transmitter", "")})
            self.add_flow(multi_ai_inference_v4l2, phasenetOp, {("", "in")})

        else:
            self.add_flow(source, viz, {("", "receivers")})
            self.add_flow(source, preprocessor_replayer, {("output", "source_video")})
            self.add_flow(source, preprocessor_phasenet_replayer, {("output", "source_video")})
            for op in [preprocessor_replayer, preprocessor_phasenet_replayer]:
                self.add_flow(op, multi_ai_inference_replayer, {("", "receivers")})
            ### connect inferenceOp to postprocessors
            self.add_flow(
                multi_ai_inference_replayer, multiheadOp, {("transmitter", "in_tensor_postproOp")}
            )
            self.add_flow(multi_ai_inference_replayer, segpostprocessor, {("transmitter", "")})
            self.add_flow(multi_ai_inference_replayer, phasenetOp, {("", "in")})

        ## connect postprocessors outputs for visualisation with holoviz
        self.add_flow(multiheadOp, viz, {("out_tensor_postproOp", "receivers")})
        self.add_flow(segpostprocessor, viz, {("", "receivers")})
        self.add_flow(phasenetOp, viz, {("out", "receivers")})
        self.add_flow(phasenetOp, viz, {("output_specs", "input_specs")})

...
multi-ai.yaml

...

multi_ai_inference_v4l2:
  # Multi-AI Inference Operator InferenceOp()
  backend: "trt"
  pre_processor_map:
    "pit_surg_model": ["prepro_v4l2"]
    "phasenet_model": ["prepro_PNv4l2"]
  inference_map:
    "pit_surg_model": ["segmentation_masks", "landmarks"]
    "phasenet_model": ["out"]
  enable_fp16: false
  parallel_inference: true # optional param, default to true
  infer_on_cpu: false # optional param, default to false
  input_on_cuda: true # optional param, default to true
  output_on_cuda: true # optional param, default to true
  transmit_on_cuda: true # optional param, default to true
  is_engine_path: false # optional param, default to false

multi_ai_inference_replayer:
  # Multi-AI Inference Operator InferenceOp()
  backend: "trt"
  pre_processor_map:
    "pit_surg_model": ["prepro_replayer"]
    "phasenet_model": ["prepro_PNreplayer"]
  inference_map:
    "pit_surg_model": ["segmentation_masks", "landmarks"]
    "phasenet_model": ["out"]
  enable_fp16: false
  parallel_inference: true # optional param, default to true
  infer_on_cpu: false # optional param, default to false
  input_on_cuda: true # optional param, default to true
  output_on_cuda: true # optional param, default to true
  transmit_on_cuda: true # optional param, default to true
  is_engine_path: false # optional param, default to false

...
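Each model named in `inference_map` needs matching input tensors in `pre_processor_map`. A tiny helper (illustrative only, not part of Holoscan) can catch such mismatches before launching the app:

```python
def check_inference_config(pre_processor_map, inference_map):
    """Return model names listed in inference_map but missing from pre_processor_map."""
    return sorted(set(inference_map) - set(pre_processor_map))


# Mirrors the multi-ai.yaml excerpt above
pre = {
    "pit_surg_model": ["prepro_v4l2"],
    "phasenet_model": ["prepro_PNv4l2"],
}
inf = {
    "pit_surg_model": ["segmentation_masks", "landmarks"],
    "phasenet_model": ["out"],
}
missing = check_inference_config(pre, inf)  # empty list means consistent
```

Running this on the excerpt above returns an empty list, i.e. every model has preprocessed inputs mapped.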

real-time-ai-for-surgery

🤝 Contributing

Figure 6: real-time-ai-for-surgery follows the Contributor Covenant Code of Conduct. Contributions, issues and feature requests are welcome.

real-time-ai-for-surgery

GitHub templates

real-time-ai-for-surgery

Release version summaries

Use cases: Real-time AI diagnosis for eye movement disorders

🤖 👀 AI in ophthalmic imaging modalities

Figure 7: Practical application of AI in all common ophthalmic imaging modalities

Li, Zhongwen, Lei Wang, Xuefang Wu, Jiewei Jiang, Wei Qiang, He Xie, Hongjian Zhou, Shanjun Wu, Yi Shao, and Wei Chen. “Artificial intelligence in ophthalmology: The path to the real-world clinic.” Cell Reports Medicine 4, no. 7 (2023).

🤖 👀 Eye movement disorders

  • Nystagmus is an eye movement disorder characterised by involuntary eye oscillations.
  • The prevalence of pathologic nystagmus is estimated at 24 per 10,000, with a slight predilection toward European ancestry [1].

[1] Sarvananthan, Nagini, Mylvaganam Surendran, Eryl O. Roberts, Sunila Jain, Shery Thomas, Nitant Shah, Frank A. Proudlock et al. “The prevalence of nystagmus: the Leicestershire nystagmus survey.” Investigative ophthalmology & visual science 50, no. 11 (2009): 5201-5206.​

🤖 👀 Real-time AI Diagnosis for Nystagmus

Future work

  • Real-time AI guidance for high-quality images (Liu et al. 2023)
  • Implement UNET-Visual Transformer models (Yao et al. 2022)

Liu, L., Wu, X., Lin, D., Zhao, L., Li, M., Yun, D., Lin, Z., Pang, J., Li, L., Wu, Y. and Lai, W., 2023. DeepFundus: a flow-cytometry-like image quality classifier for boosting the whole life cycle of medical artificial intelligence. Cell Reports Medicine, 4(2).

Yao, Chang, Menghan Hu, Qingli Li, Guangtao Zhai, and Xiao-Ping Zhang. “Transclaw u-net: claw u-net with transformers for medical image segmentation.” In 2022 5th International Conference on Information Communication and Signal Processing (ICICSP), pp. 280-284. IEEE, 2022.

Open-Source Software in Healthcare

The First Regulatory Clearance of an Open-Source Automated Insulin Delivery Algorithm

  • In 2018, Tidepool launched the Tidepool Loop initiative to generate real-world evidence and seek regulatory clearance for Loop.
  • By late 2020, Tidepool submitted an application to the FDA for an interoperable automated glycemic controller (iAGC) based on Loop.
  • After a two-year review, the FDA cleared the Tidepool Loop iAGC on January 23, 2023.

Braune, Katarina, Sufyan Hussain, and Rayhan Lal. “The first regulatory clearance of an open-source automated insulin delivery algorithm.” Journal of Diabetes Science and Technology 17, no. 5 (2023): 1139-1141.

https://www.tidepool.org/open

https://github.com/tidepool-org

https://github.com/LoopKit

Key Takeaways

  • 🔧 🏥 Challenges in translating research from bench to bedside.
  • 🤖 ♻️ Use cases for synthetic data and real-time AI-driven diagnosis.

YES! We can Revolutionise Healthcare with Open-Source Software!!!

  • 🎒 Contribute to the creation of high-quality educational and training materials.
  • Release open-source code, data, and models in alignment with quality standards.

🙌 Acknowledgements

  • Diego Kaski
    • UCL Queen Square Institute of Neurology
  • Zhehua Mao, Sophia Bano and Matt Clarkson
    • Wellcome / EPSRC Centre for Interventional and Surgical Sciences (WEISS) at UCL
  • Mikael Brudfors and Nadim Daher
    • NVIDIA Healthcare AI
  • Steve Thompson et al.
    • Advanced Research Computing Centre (ARC) at UCL