Continuous U-Net: Faster, Greater and Noiseless

Chun-Wun Cheng1,2∗
Christina Runkel2∗
Lihao Liu2∗
Raymond H. Chan1
Carola-Bibiane Schönlieb2
Angelica I. Aviles-Rivero2

1City University of Hong Kong (CityU)        2University of Cambridge

[Paper]
This is the project website for Continuous U-Net: Faster, Greater and Noiseless.

Abstract

Image segmentation is a fundamental task in image analysis and clinical practice. The current state-of-the-art techniques are based on U-shaped encoder-decoder networks with skip connections, known as U-Nets. Despite the strong performance reported for existing U-Net-type networks, they suffer from several major limitations: the receptive field size is hard-coded, which compromises both performance and computational cost, and inherent noise in the data is not accounted for. They also inherit the problems associated with discrete layers and offer no theoretical underpinning. In this work we introduce continuous U-Net, a novel family of networks for image segmentation. Firstly, continuous U-Net is a continuous deep neural network that introduces new dynamic blocks modelled by second-order ordinary differential equations. Secondly, we provide theoretical guarantees for our network, demonstrating faster convergence, higher robustness, and less sensitivity to noise. Thirdly, we derive qualitative measures to tailor our model to specific segmentation tasks. We demonstrate, through extensive numerical and visual results, that our model outperforms existing U-Net blocks on several medical image segmentation benchmark datasets.
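
To make the dynamic-block idea above concrete, below is a minimal PyTorch sketch of one continuous block: a second-order ODE x''(t) = f_theta(t, x(t), x'(t)) is rewritten as a first-order system over the state (x, x') and integrated with an off-the-shelf solver. The class name SecondOrderBlock, the convolutional vector field, the zero initial velocity, and the use of the torchdiffeq library are illustrative assumptions, not the authors' released implementation.

import torch
import torch.nn as nn
from torchdiffeq import odeint  # third-party ODE solver: pip install torchdiffeq


class SecondOrderBlock(nn.Module):
    """Dynamic-block sketch: integrates x'' = f_theta(t, x, x') over t in [0, 1]."""

    def __init__(self, channels):
        super().__init__()
        # f_theta maps the concatenated state (x, v) to the acceleration x'';
        # a small conv net is one plausible (assumed) parameterisation.
        self.f = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def dynamics(self, t, state):
        x, v = state                           # "position" x and "velocity" v = x'
        dv = self.f(torch.cat([x, v], dim=1))  # v' = f_theta(x, v)
        return (v, dv)                         # (x', v')

    def forward(self, x):
        v0 = torch.zeros_like(x)               # assumed zero initial velocity
        t = torch.tensor([0.0, 1.0], device=x.device)
        xt, _ = odeint(self.dynamics, (x, v0), t, method="dopri5")
        return xt[-1]                          # features at the end time t = 1


# Usage: the block is shape-preserving, like a residual block.
block = SecondOrderBlock(channels=16)
out = block(torch.randn(2, 16, 32, 32))  # -> torch.Size([2, 16, 32, 32])

A first-order neural-ODE block would drop the velocity state; the second-order formulation is what the abstract above links to faster convergence and lower sensitivity to noise.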


Network


Visual comparison of our continuous U-Net vs. U-Net (and variants). The zoom-in views display the difference between the discrete blocks in U-Net and our proposed dynamic blocks.


Properties

Overview of properties of our continuous U-Net vs. existing U-type networks.


Paper and Supplementary Material


Chun-Wun Cheng*, Christina Runkel*, Lihao Liu*, Raymond H. Chan, Carola-Bibiane Schönlieb, Angelica I. Aviles-Rivero.
Continuous U-Net: Faster, Greater and Noiseless
(hosted on arXiv)


[BibTeX]


Acknowledgements

CWC acknowledges support from the Department of Mathematics, College of Science, CityU, and the HKSAR Reaching Out Award. AIAR acknowledges support from CMIH and CCIMI, University of Cambridge. CBS acknowledges support from the Philip Leverhulme Prize, the Royal Society Wolfson Fellowship, the EPSRC Advanced Career Fellowship EP/V029428/1, EPSRC grants EP/S026045/1, EP/T003553/1, EP/N014588/1 and EP/T017961/1, the Wellcome Innovator Awards 215733/Z/19/Z and 221633/Z/20/Z, the European Union Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 777826 NoMADS, the Cantab Capital Institute for the Mathematics of Information, and the Alan Turing Institute.