Clinical evidence that drives the development of our innovative AI technology

Clinical Validation
August 14, 2022
FMFI Fetal Neuroimaging Conference
The Role of Artificial Intelligence in Screening and Democratizing Quality Prenatal Care: A Retrospective Validation on Sonograms of Fetal Brain

Abstract

With the emergence of Artificial Intelligence (AI), assistive technologies that leverage it could be highly beneficial in low-resource and remote settings that lack well-trained clinicians and operators, enabling timely referrals for suspected abnormalities and high-risk pregnancies. In this study, we validated the Origin Health Examination Assistant (OHEA), a software system comprising more than 10 AI algorithms, to assist in assessing mid-trimester fetal brain exams (axial: transventricular and transcerebellar views).

We retrospectively obtained a test set of 222 singleton exams (543 frozen-frame images and 39 2D cine loops; 75.2% normal, 24.8% abnormal) from a single tertiary fetal care center. The OHEA analyzed each exam for quality (appropriateness of magnification) and performed an anatomical survey (anatomical landmarks visualized/not visualized; assessment of whether structures are normal/abnormal for the gestational age). Further, a panel of 2 maternal-fetal medicine (MFM) specialists selected appropriate images from each exam, on which OHEA automatically placed the caliper points and obtained key fetal brain measurements. The standard of reference for the study was the consensus of a panel of 7 MFMs who reviewed each of these exams.

The OHEA demonstrated excellent accuracy of 97.3% in the assessment of examination quality and 98.2% in the anatomical survey, and excellent agreement with the MFM panel (intra-class correlation coefficient > 0.90 for all cases) in obtaining key measurements (BPD, HC, OFD, NFT, TCD, AW). The OHEA achieved an overall screening performance (classifying exams as normal or abnormal) of 0.95 sensitivity and 0.80 specificity. Specifically, OHEA could detect (sensitivity, specificity) choroid plexus cyst (0.97, 0.83), absent cavum septum pellucidum (0.87, 0.71), absent midline falx (1.0, 1.0), enlarged cisterna magna (1.0, 1.0), and dilated lateral cerebral ventricles (0.96, 0.96) with high accuracy.
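For readers less familiar with the screening metrics above, sensitivity and specificity follow directly from the confusion counts of the normal/abnormal decision. A minimal sketch (the counts below are hypothetical, chosen only to illustrate the formulas, not the study's data):

```python
def screening_metrics(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical confusion counts, for illustration only
sens, spec = screening_metrics(tp=19, fn=1, tn=80, fp=20)
```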

The clinical translation of such assistive technology can help clinicians and operators implement and deliver standardized and high-quality prenatal examinations in low-resource settings. In future studies, we aim to improve clinical performance, test on larger populations, and assess performance on exams obtained by novice users.

Figure 1: Performance of Origin Health Examination Assistant™

Note: Origin Health Examination Assistant™ (OHEA) is now Origin Medical EXAM ASSISTANT™ (OMEA).

Clinical Validation
October 17, 2021
31st World Congress on Ultrasound in Obstetrics and Gynecology
A Multi-centre, Multi-Device Validation of a Deep Learning System for the Automated Segmentation of Fetal Brain Structures from Two-Dimensional Ultrasound Images

Objective

To validate (multicentre, multi-device) the robustness of a single deep learning (DL) system for the simultaneous and automated segmentation of 10 key fetal brain structures from multiple planes (transventricular [TV] and transcerebellar [TC]).

Methods

We retrospectively obtained 4,190 two-dimensional (2D) ultrasonography (USG) images (1,349 pregnancies; TV + TC images) from 3 centres (2 tertiary referral centres [TRC 1, 2] + 1 routine imaging centre [RIC]) using 6 USG devices (GE Voluson P8, P6, E8, E10, S10; Samsung HERA W10). A custom U-Net was trained (2,744 images from TRC 1 [E8, S10]) on 2D fetal brain images (TV + TC) and their corresponding manual segmentations to segment 10 key fetal structures (TV + TC planes). We assessed the robustness (operator and centre variability) and generalisability (across devices) of the proposed approach on 4 independent (unseen) test sets. Test set 1 (TRC 1, trained devices): 718 images (E8, S10); test set 2 (TRC 1, unseen devices): 192 images (HERA W10, P6, E10); test set 3 (TRC 2, trained device): 378 images (E8); and test set 4 (RIC, unseen device): 158 images (P8). Segmentation performance was assessed qualitatively and quantitatively (Dice coefficient [DC]).

Results

Irrespective of the USG device/centre, the DL segmentations were qualitatively comparable to their manual counterparts. The mean DC (10 structures; test sets 1/2/3/4) was 0.83 ± 0.09 / 0.80 ± 0.08 / 0.75 ± 0.09 / 0.80 ± 0.07.

Conclusion

The proposed DL system offered promising and generalisable performance (multi-centre, multi-device). Its clinical translation can assist a wide range of users across settings to deliver standardized, quality prenatal examinations.

Figure 1: Methodology of ultrasound imaging and dataset preparation
Figure 2: Results obtained from the study
Clinical Validation
April 4, 2022
SPIE Medical Imaging
Towards a Device-Independent Deep Learning Approach for the Automated Segmentation of Sonographic Fetal Brain Structures: A Multi-Center and Multi-Device Validation

Abstract

Access to quality prenatal ultrasonography (USG) is limited by the shortage of well-trained fetal sonographers. By leveraging deep learning (DL), we can assist even novice users in delivering standardized, quality prenatal USG examinations, necessary for timely screening and specialist referrals in cases of fetal anomalies. We propose a DL framework to segment 10 key fetal brain structures across 2 axial views necessary for the standardized USG examination.

Despite training on images from only 1 center (2 USG devices), our DL model generalized well even to unseen devices from other centers. The use of domain-specific data augmentation significantly improved segmentation performance across test sets, and across other benchmarking DL models as well. We believe our work opens doors for the development of device-independent and robust models, a necessity for seamless clinical translation and deployment.

Pilots
June 30, 2022
19th World Congress in Fetal Medicine, Fetal Medicine Foundation, Crete, Greece
Feasibility and Validation of an Artificial Intelligence Based Software to Mimic a Clinically Relevant Approach for Anatomical Assessment of 2D Fetal Neurosonogram Video Loops - A Retrospective Multi-Reader Study

Objective
To clinically validate Origin Health Examination Assistant (OHEA), an artificial intelligence (AI) based system that mimics a clinically relevant approach to systematically and comprehensively assess fetal anatomy from 2D ultrasound video loops.

Methods
The Origin Health Examination Assistant (OHEA) comprises more than 10 different artificial intelligence (AI) based algorithms working together to systematically perform a thorough and comprehensive anatomical assessment of the mid-trimester (18-24 weeks) fetal brain (axial and sagittal views). In each 2D video loop, the examination quality is first assessed prior to detecting standard diagnostic views. From the standard diagnostic views, relevant fetal anatomical landmarks are detected and assessed for their visibility. The AI algorithms were trained and clinically validated on large, expert-annotated datasets of 39,420 (2,249 patients) and 11,220 (311 patients) 2D ultrasound images of the fetal brain axial and sagittal views, respectively, obtained from a single tertiary fetal care centre. We used an empirical approach to measure the confidence of each AI algorithm at every stage, ensuring reliability and providing quality control to identify incomplete examinations.

The OHEA assessed an entire 2D video loop (average: 170 frames) in under 25 seconds (Nvidia T4 16 GB GPU). On an unseen external test set of 93 axial and 27 sagittal mid-trimester examinations (2D video loops) acquired from a single tertiary fetal care centre between July 2019 and February 2022, a reader panel of 7 clinicians (6 OBGYNs and 1 radiologist) trained in fetal medicine benchmarked the performance of OHEA. Every clinician in the reader panel was allowed to accept or modify the findings provided by the OHEA. The assessment sequence consisted of assessing examination quality (minimum magnification of 50%), detecting 3 standard diagnostic views (transcerebellar [TC], transventricular [TV], mid-sagittal [MS]), and detecting and verifying the visibility of 10 key fetal anatomical structures (cranium, cavum septum pellucidum [CSP], midline falx, cerebellar lobes, cisterna magna, nuchal fold, choroid plexus, lateral cerebral ventricle, corpus callosum, and vermis).

To demonstrate the robustness of the OHEA, the external test set included both normal (95 cases) and abnormal cases (14 cases; 1 enlarged cisterna magna, 3 increased nuchal fold thickness, 6 choroid plexus cyst, 1 agenesis of the corpus callosum, 3 partial agenesis of the corpus callosum, and 1 vermian hypoplasia). A majority consensus from the reader panel was used as the gold standard to benchmark the performance of the OHEA. We used accuracy to benchmark performance and Cohen's kappa (κ) to quantify inter-rater agreement with the reader panel.
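Cohen's kappa corrects the raw agreement rate for the agreement expected by chance alone. A minimal sketch of the two-rater form (the study's panel analysis may differ in detail; labels here are illustrative):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters over nominal labels:
    kappa = (p_observed - p_chance) / (1 - p_chance)."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed agreement: fraction of items where raters match
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement from each rater's marginal label frequencies
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    p_e = sum(counts_a[k] * counts_b[k]
              for k in set(ratings_a) | set(ratings_b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```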

Results
When benchmarked against the reader panel for both the normal and abnormal cases, we observed high accuracy and excellent agreement (κ interpretation: 0.84-1.0 = excellent agreement) of 97.3% (κ=0.91) in the assessment of examination quality, 97.5% (κ=0.93) in the detection of standard diagnostic views (98.9% [κ=0.95] for TV, 100% [κ=1.0] for TC, and 93.7% [κ=0.84] for MS views), and 98.2% (κ=0.95) in the detection and verification of the anatomical structures (100% [κ=1.0] for cranium, 97.8% [κ=0.94] for CSP, 100% [κ=1.0] for midline falx, 100% [κ=1.0] for cerebellar lobes, 100% [κ=1.0] for cisterna magna, 98.9% [κ=1.0] for nuchal fold, 98.9% [κ=0.95] for choroid plexus, 98.9% [κ=0.96] for lateral ventricle, 93.7% [κ=0.88] for corpus callosum, and 93.7% [κ=0.87] for vermis).

Conclusion
We have demonstrated the feasibility of developing and validating an AI system that takes a clinically relevant and systematic approach to assessing fetal anatomy. We believe such assistive technologies could be highly valuable in low-resource and remote settings that lack well-trained clinicians and operators, enabling timely referrals for suspected abnormalities and high-risk pregnancies. In high-volume clinical practices, assistive technologies that mimic a clinically relevant approach to automated assessment can shorten examination and reading times and reduce operator fatigue and burnout.


Clinical Validation
August 22, 2021
Singapore International Congress of O&G
Development and Validation of an Artificial Intelligence Based System for the Automated Detection of Choroid Plexus Cyst from Fetal Cranial Sonograms: A Multi-Center Retrospective Study

Background

A ‘choroid plexus (CP) cyst’ refers to a small, round, fluid-filled area that forms in the choroid plexus. Although an isolated finding of a CP cyst does not alter the management of a pregnancy, it is strongly associated with multiple other anomalies and is an important marker for trisomy 18: fetuses with trisomy 18 have CP cysts about one-third of the time. Automated detection of CP cysts seen during mid-trimester ultrasonography (USG) examinations (1-2% of cases) is critical in settings that lack well-trained sonographers, enabling referrals to tertiary/specialist centers for a detailed search for associated anomalies or ruling them out as normal variants.

Methods

A total of 2,673 2D USG images (non-cystic/cystic CP: 2,470/203) of the transventricular (TV) plane were retrospectively obtained from 848 subjects (targeted mid-trimester scans) at 2 tertiary referral centers using 3 commercial ultrasound devices (General Electric [GE] Healthcare; GE Voluson E8/P8/S10). We propose a two-step AI approach for the automated detection of the CP cyst (Step 1: segmentation of the CP from 2D TV USG images; Step 2: classification of the segmented CP as cystic/non-cystic). The segmentation network (U-Net based) was trained and tested on 1,582 and 588 images, respectively; its performance was evaluated using the Dice coefficient (scale: 0 = no overlap, 1.0 = complete overlap; compared against manual segmentations). The classification network (ResNet-18 based) was trained and tested on 122 and 381 images (equal cystic/non-cystic split) to classify the segmented regions as cystic/non-cystic. Sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were used to evaluate classifier performance. Clinical impressions (reviewed by fetal medicine specialists) from USG scan reports were used as ground truth for training and testing the AI networks. We ensured there was no data duplication or patient overlap between datasets and performed class balancing (by USG device and cystic/non-cystic status).
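The Dice coefficient used to score the segmentation network quantifies the overlap between a predicted binary mask and its manual reference. A minimal sketch (not the study's implementation):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2 * |pred AND target| / (|pred| + |target|).
    0 = no overlap, 1.0 = complete overlap."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```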

Results

The CP segmentation network achieved a Dice coefficient of 0.85. The cystic/non-cystic classifier achieved a sensitivity, specificity, and AUC of 0.86, 0.89, and 0.94, respectively.

Conclusion

We developed and validated a fully automated AI system for the detection of CP cysts from 2D USG images of the fetal brain. The clinical translation of such frameworks can help expectant mothers in low-resource settings receive timely referrals for detailed examination.

Figure 1: Qualitative analysis of the DL system using Grad-CAM. Baseline images (top row) and the corresponding deep learning predictions are shown for 4 cases. The colormap scale to the right corresponds to the confidence of the deep learning system in detecting the CP cyst in various regions (red [1] = high confidence; purple [0] = low confidence).
We observe that, in both the unilateral and bilateral CP cyst cases, the system successfully segments the CP (in white) and correctly classifies it as cystic/non-cystic.


Clinical Validation
March 31, 2022
IEEE 19th International Symposium on Biomedical Imaging
Leveraging Clinically Relevant Biometric Constraints to Supervise a Deep Learning Model for the Accurate Caliper Placement to Obtain Sonographic Measurements of the Fetal Brain

Purpose

To develop a deep learning (DL) system for the automated caliper placement to obtain key sonographic measurements of the fetal brain and evaluate the effect of leveraging clinically relevant biometric constraints and domain-relevant data augmentations in its performance.

Methods

A total of 1,192 images (596 transcerebellar, 596 transventricular) were retrospectively obtained from 473 mid-trimester USG examinations (18-24 weeks; transabdominal scans) at 3 centers (2 tertiary referral centers and 1 routine imaging center) using GE Voluson E8, S10, and P8 USG machines. For all training images, the caliper positions for 4 measurements (TV plane: atrial width [AW]; TC plane: transcerebellar diameter [TCD], nuchal fold thickness [NFT], cisterna magna size [CMS]) were provided by medical expert annotators based on internationally prescribed guidelines. We trained a DL system (U-Net based) to automatically predict the caliper positions from the expert-annotated data and computed each biometric measurement as the Euclidean distance between its pair of caliper points. The DL system's performance was assessed on an unseen test set of 145 images (145 pregnancies) annotated by 7 experienced clinicians.

The mean Euclidean error for each caliper position, the Euclidean error of each biometric measurement (DL vs. the 7 clinicians), and the absolute agreement (intra-class correlation coefficients [ICC]; two-way random; single rater) were used as the performance assessment metrics. Additionally, the effect of leveraging clinically relevant constraints and domain-relevant data augmentations was tested across three different architectures to demonstrate the generalizability of the approach.
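Turning two predicted caliper points into a biometric measurement reduces to a scaled Euclidean distance. A minimal sketch, assuming isotropic pixel spacing in mm (the spacing value and the (row, col) coordinate convention are illustrative, not taken from the study):

```python
import math

def caliper_measurement_mm(point_a, point_b, pixel_spacing_mm):
    """Biometric measurement as the Euclidean distance between two
    caliper points given in (row, col) pixel coordinates, scaled by
    an assumed isotropic pixel spacing in mm."""
    d_row = (point_a[0] - point_b[0]) * pixel_spacing_mm
    d_col = (point_a[1] - point_b[1]) * pixel_spacing_mm
    return math.hypot(d_row, d_col)
```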

Results

The mean Euclidean error across the 4 measurements was 0.88 ± 0.59 mm, and the DL system was in good to excellent agreement with the 7 clinicians. The proposed biometric constraint and domain-relevant data augmentations improved performance by 3% and 6%, respectively, across the three different architectures.

Conclusion

Traditional computer vision approaches to automated measurement depend on the quality of an upstream segmentation. Our approach eliminates this dependency by obtaining the caliper points directly, modeling the problem as landmark detection. This removes the need to prepare expensive segmentation datasets and opens doors to a vastly generalizable, reusable framework for obtaining any measurement directly from landmark points, without developing custom computer vision algorithms for each. Clinically, we believe the successful translation of the proposed framework can assist novice users in the accurate and standardized assessment of fetal brain USG examinations, aiding the screening of CNS anomalies.

