Hi, I am Miguel. I am currently a Senior Research Engineer at University College London, UK, where I lead advances in data-centric AI tools for medical imaging, MedTech, SurgTech, biomechanics, and clinical translation. My work focuses on driving innovation and delivering impact in several critical areas: real-time AI for surgery, eye movement disorders, and echocardiography; sensor fusion that integrates wearable trackers with medical imaging; generative models for fetal imaging; and child-robot interaction in low-resource countries. By harnessing these technologies, I am dedicated to transforming healthcare through AI and making a lasting impact on patient care and medical research.
Short-bio & cv >
I am currently a Senior Research Engineer at University College London's Advanced Research Computing Centre, advancing real-time AI-enabled surgical tools by developing end-to-end frameworks that cover training, inference, API design, and deployment to medical devices (see slides). I am also driving improvements in the quality and reproducibility of UCL's open-source software projects, ensuring they meet the highest standards of excellence (see PIXL Image eXtraction Laboratory and Surgical Navigation tools). In addition, I have contributed to maintaining AI-based Surgical Navigation tools in collaboration with Stephen Thompson, Dr Thomas Dowrick and Prof Matt Clarkson from WEISS.
Previously, I was a Research Associate at King's College London in the School of Biomedical Engineering and Imaging Sciences. From September 2021 to September 2022, within the Vietnam ICU Translational Applications Laboratory (VITAL) project, I contributed to automatic biometric recognition from cardiac ultrasound data using deep learning techniques, in collaboration with Dr Alberto Gomez and Dr Andrew King. Before that, as a Research Associate from April 2019 to August 2021, I pushed forward the state of the art in ultrasound-guided procedures in the Guided Instrumentation for Fetal Therapy and Surgery (GIFT-Surg) project, contributing to the development of new algorithms, software, hardware and medical device quality management systems, in close collaboration with Prof Tom Vercauteren (Professor of Interventional Image Computing) and Dr Wenfeng Xia. In July 2019, I was awarded a PhD in Computer Engineering from the University of Birmingham, where I investigated nonlinear analysis to quantify movement variability in human-humanoid interaction under the supervision of Prof Chris Baber (Professor of Pervasive and Ubiquitous Computing) and Prof Martin Russell (Professor of Information Engineering).
Generally speaking, I have 20 years' experience in human-robot interaction, electronics, mechatronics, signal processing and AI in healthcare, along with 12 years' experience as a teaching associate in Mechatronics and Computer Engineering and 5 years' experience in public engagement activities. I am passionate about Open Science, which is why I open-source all my contributions to knowledge on my GitHub @mxochicale. I also tweet and retweet about AI, Robotics, Chaos and Healthcare @_mxochicale, I write in my blog from time to time, and in my free time I am trying to improve my juggling skills and enjoy time with my family and friends.
Curriculum Vitae
For detailed information, please see my two-page CV and its GitHub repository:
News >
- 2024-08-01: 🎉 I am excited to share that I have been promoted to Senior Research Software Engineer at the Advanced Research Computing Centre!
- 2024-03-01: The full-day workshop "Open-Source Software for Surgical Technologies" at the 16th Hamlyn Symposium on Medical Robotics was accepted!
- 2023-09-26: "Towards lightweight transformer-based models with multimodal data for low-latency surgical applications" was accepted for a 15-minute presentation at Fast Machine Learning for Science [Tue 26/09/2023; 17:00 - 17:15]
- 2023-09-08: Sujon Hakim presented his poster at the In2research Celebration Event, Friday 8th September, at the Royal Institution, London
- 2023-09-06: Contributed to delivering a workshop on "How to use and contribute to a software sustainability dashboard"
- 2023-05-09: Qingyu Yang from the Institute of Health Informatics at UCL started her MPhil project under my supervision with the project titled: "High-resolution fetal brain ultrasound imaging synthesis".
- 2023-04-28: "Towards Realistic Ultrasound Fetal Brain Imaging Synthesis" was accepted at Medical Imaging with Deep Learning 2023 (MIDL2023)
- 2023-04-01: "Towards a Simple Framework of Skill Transfer Learning for Robotic Ultrasound-guidance Procedures" was accepted at Robot-Assisted Medical Imaging (RAMI) ICRA workshop 2023
- 2023-03-20: In2research Placement project "AI-based evaluation of surgical skills using open-source software libraries" was accepted. Placement will be 8 weeks and take place between 1 June and 31 Aug 2023.
- 2023-03-02: "Open-Source Software for Surgical Technologies" was accepted at the Hamlyn Symposium on Medical Robotics 2023, which I am leading!
- 2023-01-18: Xiaoning Zhu from the Institute of Health Informatics at UCL started her MPhil project under my supervision with the project titled: "Automatic Medical Image Reporting".
- 2022-10-10: 🎉 I've started a new role as Research Engineer @ucl_arc / @WEISS_UCL to push forward @scikit_surgery
- 2022-09-30: Samuel Eyob and Pablo Prieto Roca successfully completed their projects on 3D fetal ultrasound synthesis for the King’s Undergraduate Research Fellowships 2022.
- 2022-09-30: Tsz Yan (Goosie) Leung successfully submitted and presented her M.Sc. in Medical Engineering and Physics, achieving a distinction. Project title: "Prototype and Study of a Motion-Image Fusion System for Obstetrics Ultrasound Transducer Tracking".
- 2022-09-30: Michelle Iskandar successfully submitted and presented her M.Sc. in Healthcare Technologies, investigating Fetal Ultrasound Imaging Synthesis using DCGAN and FASTGAN.
- 2022-08-30: Thea Bautista received the Maurice Wilkins Prize for the best MEng Individual Research Project in 2022
- 2022-07-28: "Empirical Study of Quality Image Assessment for Synthesis of Fetal Head Ultrasound Imaging with DCGANs" was accepted at the MIUA 2022 conference. Paper, code, data and more
- 2022-03-11: Presented "Piloting diversity and inclusion workshops in AIR4Children" at HRI2022
- 2021-10-05: Thea Bautista from the BMEIS started her MEng project under my supervision with the project titled: "GAN-based synthetic ultrasound imaging for fetal development".
- 2021-09-01: I joined VITAL project as Research Associate, at King's College London within the BMEIS, to push forward the area of real-time automatic biometric recognition of Cardiac ultrasound using deep learning techniques.
- 2021-07-01: I supervised two summer undergraduate projects from the King's Undergraduate Research Fellowships: Amal Hussein created new designs of 3D-printed probes for an ultrasound-guidance simulator, and Guilherme Gomes de Figueiredo explored the promising area of synthetic ultrasound imaging with AI.
- 2021-03-08: Presented "AIR4Children: Artificial Intelligence and Robotics for Children" at HRI2021
- 2021-01-07: King's Public Engagement Award on FETUS: Finding a fETus with an Ultrasound Simulator
- 2020-10-15: Alexander Mitton, my first supervised Master student at BMEIS, won the award of outstanding individual project
- 2020-09-11: Presented "open-corTeX: A continuous integration framework for open scientific communication" at RRTS20
- 2020-04-14: King's Health Partners grant for the project "Sensory system abnormalities in childhood dystonia", led by Verity McClelland and in collaboration with Carlos Seneci
Projects >
List of projects I am leading (or have led):
- May 2024 - Present 🤖 👁️ READY: REal-time Ai Diagnosis for nYstagmus (GitHub repository will be open in Jan 2025!)
- Oct 2023 - Apr 2024 Real-time AI for surgery with the NVIDIA Holoscan platform: Lead research software engineer advancing real-time AI-enabled surgical tools by developing end-to-end frameworks, including training, inference, API design, and deployment to medical devices (see slides).
- 2021-2022 GAN-based synthetic ultrasound imaging for fetal development
- 2021-2022 Towards simple and effective ultrasound-guidance procedures
- 2021-2021 Simulator of ultrasound-guided needle for public engagement events
- 2020-2021 Low-power vibrational stimulator to study evoked cortical responses
- 2020-ongoing I am innovating with "AIR4Children: Artificial Intelligence and Robotics for Children" in low-resource countries.
- before-2020 You might like to have a look at some of my previous projects: demos of human-robot interaction and others.
Publications >
- Kerfoot E and Xochicale M (2022). SofHarDevOps4BioMedEng: Hands-on Workshop on Software and Hardware Developer Operations for Biomedical Engineers. 2nd Conference on Reproducibility, Replicability and Trust in Science 2022.
- Bautista T, Matthew J, Kerdegari H, Peralta-Pereira L and Xochicale M (2022). Empirical Study of Quality Image Assessment for Synthesis of Fetal Head Ultrasound Imaging with DCGANs. 26th Conference on Medical Image Understanding and Analysis (MIUA 2022).
- Montenegro R, Corona E, Perez-Badillo D, Cruz D and Xochicale M (2021). AIR4Children: Artificial Intelligence and Robotics for Children. HRI2021 Proceedings of the 16th Annual Conference on Human-Robot Interaction. doi-to-be-published.
- Xochicale M and Baber C (2020). Nonlinear Methods to Quantify Movement Variability in Human-Humanoid Interaction Activities. arXiv:1810.09249.
- Xochicale M and Baber C (2017). Towards the Analysis of Movement Variability in Human-Humanoid Imitation Activities. HAI2017 Proceedings of the 5th International Conference on Human-Agent Interaction. doi:10.1145/3125739.3132595.
- Xochicale M, Baber C and Oussalah M (2017). Towards the Quantification of Human-Robot Imitation Using Wearable Inertial Sensors. HRI2017 Proceedings of the 12th Annual Conference on Human-Robot Interaction. doi:10.1145/3029798.3038320.
- Xochicale M, Baber C and Oussalah M (2016). Analysis of the Movement Variability in Dance Activities Using Wearable Sensors. WeRob2016 Proceedings of the International Symposium on Wearable Robotics. doi:10.1007/978-3-319-46532-6_25.
- Xochicale M, Baber C and Oussalah M (2016). Understanding Movement Variability of Simplistic Gestures Using an Inertial Sensor. PerDis '16 Proceedings of the 5th ACM International Symposium on Pervasive Displays. doi:10.1145/2914920.2940337.
Contact me >
I am always interested in exploring academic and industry opportunities. Please feel free to reach out and connect: m.xochicale@ucl.ac.uk.