Deep learning for image segmentation and registration in radiotherapy

Student thesis: PhD

Abstract

Radiotherapy is used in the treatment of 80% of head and neck (HN) cancers. The goal of radiotherapy is to deliver the prescribed dose to regions affected by cancer, or clinical targets, whilst minimising irradiation of surrounding healthy tissues to reduce the risk of side-effects for the patient. For treatment planning, a patient is imaged, often with a computed tomography (CT) scan, and the targets and nearby organs-at-risk (OARs) are then outlined, or contoured. Contouring is a highly subjective task, which impacts reproducibility (inter-observer variability) and repeatability (intra-observer variability) even when strict guidelines are in place. Automatic OAR contouring, known as auto-segmentation, with deep learning (DL)-based approaches is becoming standard practice to reduce clinician workload and improve consistency. However, because all auto-segmentations require manual review, the overall efficiency of automated workflows remains limited. The first aims of this project were to develop a DL-based auto-segmentation method that could be trained effectively with a small dataset, to investigate the impact of training dataset size and consistency, and to develop an automated quality assurance (QA) method that could expedite the segmentation review process.

The speed improvements offered by DL methods also benefit image registration, in particular non-rigid registration (NRR). NRR is a key tool in radiotherapy, used to align data from multiple images for several tasks, including spatial evaluation of treatment response, dose mapping and adaptive treatment techniques. DL methods provide order-of-magnitude time savings over traditional algorithms, but current approaches struggle to cope with large anatomical differences. The final project aims were to enable a learning-based NRR method to use anatomical information to better handle such differences.

Highlights of the work presented in this thesis, in chapter order, include:

  • A three-dimensional (3D) convolutional neural network (CNN) model developed using a preliminary localisation network and a multi-channel input approach to maximise the contrast of relevant anatomical regions (see the first sketch below). The resulting model performed comparably with inter-observer deviations when trained with just 34 HN CT scans.
  • An investigation of the impact of training dataset size for HN OAR auto-segmentation, in which multiple CNNs were trained with datasets ranging from 25 to 1000 CT scans. Segmentation performance increased with dataset size, but beyond 250 scans the gains became negligible.
  • A further investigation of the effect of the consistency of the training examples. A model trained with a small dataset of highly consistent segmentations outperformed the same architecture trained with a much larger set of less consistent segmentations.
  • An automated QA method to assess the quality of HN OAR segmentations. A hybrid CNN and geometric learning architecture was devised that leverages imaging and shape information to predict where input parotid gland segmentations differ from the ESTRO guidelines. The model was further tested on more than 1000 segmentations from manual and automated sources, predicting errors that correspond to known patterns of inter-observer variation.
  • A geometric learning model that estimates dense correspondences between 3D HN organs. The method, originally developed for generic natural shapes, was improved through the inclusion of imaging as an additional loss function term (see the second sketch below). This learning-based correspondence method outperformed a classical NRR algorithm at matching corresponding anatomical landmarks, demonstrating its potential for use within an anatomically-informed NRR approach.
  • An anatomically-informed learning-based registration framework. The correspondence model was adapted to provide a non-rigid initialisation using a thin plate spline deformation model prior to NRR with an established CNN model (see the third sketch below). The proposed anatomically-informed initialisation allowed the CNN approach to perform comparably with a traditional iterative NRR algorithm while maintaining a substantial speed advantage.

The presented work could benefit multiple areas of radiotherapy. Direct uses in the treatment pathway include auto-segmentation of newly defined structures, automatic QA of contours, and accurate dose mapping in the re-irradiation setting. Additionally, the anatomically-informed NRR framework could be used to explore and establish spatially-informed dose-response relationships to improve future radiotherapy.
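First sketch: one common way to realise a multi-channel input that emphasises different anatomical regions is CT intensity windowing, where each channel clips the scan to a different Hounsfield-unit window before it is passed to the 3D CNN. This is a minimal sketch of that general idea; the window levels and widths, and the helper names, are illustrative assumptions and not the values used in the thesis.

```python
import numpy as np

def window_ct(hu: np.ndarray, level: float, width: float) -> np.ndarray:
    """Clip a CT volume (in Hounsfield units) to a window and rescale to [0, 1]."""
    lo, hi = level - width / 2, level + width / 2
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

def multi_channel_input(hu: np.ndarray) -> np.ndarray:
    """Stack several windowed copies of the scan as CNN input channels.

    The (level, width) pairs below are hypothetical examples chosen to highlight
    soft tissue, bone and a wide-range view; the thesis does not specify them here.
    """
    windows = [(50, 400), (400, 1800), (0, 2000)]    # soft tissue, bone, wide
    channels = [window_ct(hu, lvl, wid) for lvl, wid in windows]
    return np.stack(channels, axis=0)                # shape: (3, D, H, W)

# Example: a dummy 3D scan of shape (depth, height, width)
scan = np.random.randint(-1000, 2000, size=(64, 128, 128)).astype(np.float32)
x = multi_channel_input(scan)
print(x.shape)  # (3, 64, 128, 128), ready for a 3D CNN expecting 3 input channels
```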
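Second sketch: the image-informed correspondence loss can be pictured as a standard geometric matching loss plus a weighted term that penalises appearance mismatch between matched points. The exact formulation from the thesis is not reproduced here; the soft correspondence matrix, the sampled image features and the weight `alpha` are all assumed names and values for illustration only.

```python
import torch

def image_similarity_term(src_feats: torch.Tensor,
                          tgt_feats: torch.Tensor,
                          soft_corr: torch.Tensor) -> torch.Tensor:
    """Penalise image-appearance mismatch between matched points.

    src_feats: (N, C) image-derived features sampled at source surface points
    tgt_feats: (M, C) image-derived features sampled at target surface points
    soft_corr: (N, M) soft correspondence matrix (rows sum to 1)
    """
    matched_tgt = soft_corr @ tgt_feats      # expected target features per source point
    return torch.mean((src_feats - matched_tgt) ** 2)

def total_loss(geometric_loss: torch.Tensor,
               src_feats: torch.Tensor,
               tgt_feats: torch.Tensor,
               soft_corr: torch.Tensor,
               alpha: float = 0.1) -> torch.Tensor:
    """Geometric correspondence loss plus a weighted imaging term (alpha is a guess)."""
    return geometric_loss + alpha * image_similarity_term(src_feats, tgt_feats, soft_corr)

# Example with dummy tensors: 40 source points, 50 target points, 1 intensity channel
src = torch.rand(40, 1)
tgt = torch.rand(50, 1)
corr = torch.softmax(torch.rand(40, 50), dim=1)   # soft correspondences, rows sum to 1
geo = torch.tensor(0.25)                          # placeholder geometric loss value
print(total_loss(geo, src, tgt, corr))
```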
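Third sketch: a thin plate spline (TPS) initialisation can be built with `scipy.interpolate.RBFInterpolator` by fitting a TPS to the predicted point correspondences and evaluating it on the moving image's voxel grid, giving a dense displacement field that warps the image before the CNN-based NRR step. The grid handling and function names below are assumptions for illustration, not the thesis implementation.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def tps_initialisation(moving_pts: np.ndarray,
                       fixed_pts: np.ndarray,
                       grid_shape) -> np.ndarray:
    """Fit a thin plate spline to point correspondences and evaluate a dense
    displacement field on a voxel grid.

    moving_pts, fixed_pts: (P, 3) corresponding points (e.g. from a correspondence
    model); grid_shape: voxel grid of the moving image.
    """
    tps = RBFInterpolator(moving_pts, fixed_pts - moving_pts,
                          kernel='thin_plate_spline')
    grid = np.stack(np.meshgrid(*[np.arange(s) for s in grid_shape],
                                indexing='ij'), axis=-1).reshape(-1, 3)
    displacement = tps(grid).reshape(*grid_shape, 3)
    return displacement  # used to warp the moving image before CNN-based NRR

# Example with random correspondences on a small grid
rng = np.random.default_rng(0)
moving = rng.uniform(0, 31, size=(50, 3))
fixed = moving + rng.normal(0, 1.0, size=(50, 3))
dvf = tps_initialisation(moving, fixed, (32, 32, 32))
print(dvf.shape)  # (32, 32, 32, 3)
```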
Date of Award: 13 Feb 2024
Original language: English
Awarding Institution
  • The University of Manchester
Supervisors: Eliana Vasquez Osorio, Andrew Green & Marcel Van Herk

Keywords

  • head and neck radiotherapy
  • image segmentation
  • image registration
  • head and neck cancer
  • contouring quality assurance
  • correspondence
  • machine learning
  • auto-contouring
  • convolutional neural networks
  • deformable registration
  • non-rigid registration
  • OAR contouring
  • auto-segmentation
