Jenny Ji Hyun Kim

Surgical Planning | Computer Vision

Towards Automatic Generation of Patient-specific Spino-pelvic 3D Models from Bi-planar X-Ray

[2021 – 2022]

Spine Biomechanics Laboratory

Balgrist University Hospital | ETH Zurich

Principal Investigator: Jonas Widmer, PhD

Accepted to MIDL 2022

EOS Capture

EOS Output [Biplanar X-Rays]

Clinician Annotated [Frontal]

Clinician Annotated [Sagittal]

ABSTRACT

This project aims to create a pre-operative surgical planning tool through the generation of patient-specific, three-dimensional (3D) musculoskeletal models of the spino-pelvic region. The main dataset consists of 450 annotated images captured using the EOS imaging system. A base model for medical image segmentation based on the U-Net architecture is trained on this main dataset. A supplemental, publicly available dataset of 90 segmented computed tomography (CT) scans is used to create ground-truth segmentations and corresponding digitally reconstructed radiograph (DRR) images. Using a Cycle-Consistent Generative Adversarial Network (CycleGAN), the DRR images are style-transferred to X-ray-styled images. Using this new, improved dataset, transfer learning techniques are applied to fine-tune the model. Bony segments such as the pelvis, sacrum, spine, and ribcage are segmented on the two planes. The bi-planar segmentations are fed into the 3D reconstruction model to generate a patient-specific 3D model.

Workflow Diagram
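For readers curious about the segmentation backbone, below is a minimal sketch of a U-Net-style encoder-decoder in PyTorch. The depth, channel widths, and class count (background, pelvis, sacrum, spine, ribcage) are illustrative assumptions chosen for compactness, not the configuration actually used in the project.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU: the basic U-Net building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class MiniUNet(nn.Module):
    """Two-level U-Net for multi-class bony-segment segmentation (sketch)."""
    def __init__(self, n_classes=5):  # hypothetical: bg, pelvis, sacrum, spine, ribcage
        super().__init__()
        self.enc1 = conv_block(1, 32)   # single-channel (grayscale) X-ray input
        self.enc2 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)  # 64 = 32 skip channels + 32 upsampled
        self.head = nn.Conv2d(32, n_classes, 1)

    def forward(self, x):
        s1 = self.enc1(x)               # kept as a skip connection
        x = self.enc2(self.pool(s1))
        x = self.up(x)
        x = self.dec1(torch.cat([s1, x], dim=1))
        return self.head(x)             # per-pixel class logits

logits = MiniUNet()(torch.randn(1, 1, 256, 256))  # -> shape (1, 5, 256, 256)
```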

CONTRIBUTIONS

Previous work from the Spine Biomechanics Lab took bi-planar EOS images and output the spino-pelvic parameters used in surgical planning, such as Cobb angles, thoracic kyphosis, lumbar lordosis, sagittal vertical axis, sacral slope, pelvic tilt, and pelvic incidence. The input data was annotated by trained clinicians; the 'masks' created from the annotations were good approximate representations of the bony segments. The U-Net model was trained for bony segment classification using a dataset of 450 annotated masks and images. Subsequently, image processing techniques were applied to the predicted regions to calculate the spino-pelvic parameters.
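To make the parameter-extraction step concrete, here is a deliberately simplified sketch of how one such parameter, the sacral slope, could be estimated from a predicted sacrum mask by fitting a line to its superior edge on the sagittal view. The function and its geometry are hypothetical simplifications for illustration, not the lab's actual calculation.

```python
import numpy as np

def sacral_slope_deg(sacrum_mask: np.ndarray) -> float:
    """Rough sacral-slope estimate: fit a line through the top edge of a
    binary sacrum mask and measure its angle to the horizontal.
    Note: image rows increase downward, so the sign convention is flipped
    relative to the anatomical definition."""
    cols = np.where(sacrum_mask.any(axis=0))[0]                 # occupied columns
    top = np.array([sacrum_mask[:, c].argmax() for c in cols])  # first True per column
    slope, _ = np.polyfit(cols, top, 1)                         # least-squares line fit
    return float(np.degrees(np.arctan(slope)))

# Toy example: a tilted rectangular "sacrum" yields a nonzero slope.
mask = np.zeros((100, 100), dtype=bool)
for c in range(30, 70):
    mask[40 + (c - 30) // 4 : 90, c] = True
print(sacral_slope_deg(mask))  # ~14 degrees in image coordinates
```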

My work involved further refining the model to improve its accuracy. This was achieved by applying pre-processing techniques such as the active contour model to improve the clinician-annotated masks. Additionally, representative ground truths were obtained from a pelvic CT dataset, which was converted to the same data format as the input dataset: it was transformed into bi-planar 2D DRRs through projection and converted to X-ray style using CycleGAN for style transfer (recall the horse-to-zebra example from computer vision courses). The base model was refined using transfer learning with this supplementary, CT-derived dataset. The final output segmentations on the test set were post-processed using image processing techniques such as hole filling and morphological opening.
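The post-processing step named above can be illustrated concretely. The sketch below cleans a binary prediction with hole filling and morphological opening using SciPy and scikit-image; the structuring-element size is an arbitrary assumption, not the value used in the project.

```python
import numpy as np
from scipy import ndimage
from skimage import morphology

def clean_segmentation(pred_mask: np.ndarray) -> np.ndarray:
    """Post-process a binary prediction: fill interior holes, then apply
    morphological opening to remove small spurious islands."""
    filled = ndimage.binary_fill_holes(pred_mask)
    return morphology.binary_opening(filled, morphology.disk(3))

# Toy mask with a hole inside the segment and a speck of noise outside it.
mask = np.zeros((64, 64), dtype=bool)
mask[16:48, 16:48] = True
mask[30:34, 30:34] = False   # interior hole -> filled
mask[2, 2] = True            # isolated false positive -> removed by opening
cleaned = clean_segmentation(mask)
```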

The fine-tuned model improved the F1-score by 12%, reaching 96.7%.
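For reference, the pixel-wise F1-score for binary segmentation masks (equivalent to the Dice coefficient) is conventionally computed as sketched below; the exact evaluation protocol used in the project may differ.

```python
import numpy as np

def f1_score(pred: np.ndarray, gt: np.ndarray) -> float:
    """Pixel-wise F1 for binary masks: F1 = 2*TP / (2*TP + FP + FN)."""
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return 2 * tp / (2 * tp + fp + fn)

pred = np.array([[1, 1], [0, 0]], dtype=bool)
gt   = np.array([[1, 0], [0, 0]], dtype=bool)
print(f1_score(pred, gt))  # 2*1 / (2*1 + 1 + 0) = 0.667
```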

Model Performance Improvement

MOTIVATION

My motivation was to harness generative AI techniques to enhance the surgical planning process for spinal surgeries. As an engineer with a strong visual and artistic inclination, I found it fascinating to apply computer vision techniques that turn the information carried by a group of pixels into something tangible, such as a personalized prediction of the pelvic segments. It was a groundbreaking experience to transform intensities, lines, and corners into meaningful data for critical, practical uses such as surgical planning.

DRR Images Style Transferred to X-Ray

Input DRR – Style Transferred | Ground Truth | Base-Model Prediction

Input DRR – Style Transferred | Ground Truth | Fine Tuned Model Prediction

Input X-Ray – Style Transferred | Ground Truth | Base-Model Prediction

Input X-Ray – Style Transferred | Ground Truth | Fine Tuned Model Prediction

In the abstract accepted to MIDL 2022, we propose a 3D reconstruction strategy using shape priors.
