Current ultrasound image synthesis techniques often fall short in semantic accuracy and physical realism, or produce images with a significant domain gap. Ultra-NeRF addresses these issues by learning a neural field of acoustic properties from pose-annotated B-mode images and shows that this field can be used for novel view synthesis of B-mode images. While Ultra-NeRF generates plausible results, it lacks explainability in the acoustic parameter space. In this paper, we revisit neural fields for ultrasound and introduce the Sonographic Neural Reflection Field (SuRF), which adheres to the physics of ultrasound wave propagation. By redesigning Ultra-NeRF's differentiable forward synthesis model and incorporating physics-inspired regularizations, we ensure that the learned acoustic parameters remain interpretable. Evaluated on the Ultra-NeRF in-silico dataset and a new multi-view ex-vivo 3D ultrasound dataset, our method demonstrates improved reconstruction and more interpretable acoustic parameters across tissue types including fat, muscle, and bone.
Entry type: inproceedings
BibTeX key: WAT+24
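To make the abstract's central mechanism concrete, below is a minimal sketch of a differentiable ultrasound forward model along one scanline, with a physics-inspired regularizer on the attenuation field. The simplified rendering (Beer-Lambert attenuation combined with energy lost to earlier reflections) and all names here (render_scanline, physics_regularizer, alpha, beta) are illustrative assumptions, not the authors' implementation; the published models are richer, e.g. they also include scattering terms.

```python
import torch

def render_scanline(alpha, beta, dt=1.0, i0=1.0):
    """Toy differentiable B-mode forward model for one scanline.

    alpha: per-sample attenuation coefficients along the ray, shape (T,)
    beta:  per-sample reflectance in [0, 1], shape (T,)
    """
    # Beer-Lambert decay from accumulated attenuation along the ray.
    attenuation = torch.exp(-torch.cumsum(alpha, dim=0) * dt)
    # Fraction of energy surviving reflections at all *earlier* samples.
    survived = torch.cumprod(1.0 - beta, dim=0)
    survived = torch.cat([torch.ones(1), survived[:-1]])
    incident = i0 * attenuation * survived
    # Recorded echo at each depth: incident energy times local reflectance.
    return incident * beta

def physics_regularizer(alpha, lam_smooth=1e-2):
    """Hypothetical physics-inspired penalties: attenuation should stay
    non-negative and vary smoothly within homogeneous tissue."""
    nonneg = torch.relu(-alpha).mean()
    smooth = (alpha[1:] - alpha[:-1]).abs().mean()
    return nonneg + lam_smooth * smooth

# Both terms are differentiable, so a neural field predicting
# (alpha, beta) per sample can be trained end-to-end on B-mode images.
alpha = torch.rand(256, requires_grad=True)
beta = torch.rand(256, requires_grad=True)  # toy reflectance in [0, 1)
loss = render_scanline(alpha, beta).sum() + physics_regularizer(alpha)
loss.backward()
```

In this simplified view, interpretability comes from the forward model itself: because each parameter has a fixed physical role in the rendering equation, the fit cannot silently trade attenuation against reflectance the way an unconstrained renderer can.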