Articles

Gas Plasma Technology—An Asset to Healthcare During Viral Pandemics Such as the COVID-19 Crisis?

Sander Bekeschus, Axel Kramer, Elisabetta Suffredini, Thomas von Woedtke, and Vittorio Colombo

Total Body PET: Why, How, What for?

Suleman Surti, Austin R. Pantel and Joel S. Karp

Photon Counting CT: Clinical Applications and Future Developments

Scott S. Hsieh, Shuai Leng, Kishore Rajendran, Shengzhen Tao, and Cynthia H. McCollough

Magnetic Resonance Fingerprinting: Implications and Opportunities for PET/MR

Kathleen M. Ropella-Panagis, Nicole Seiberlich and Vikas Gulani

Model-Based Deep Learning PET Image Reconstruction Using Forward-Backward Splitting Expectation Maximisation

Abolfazl Mehranian and Andrew J. Reader

All papers published in the journal three years prior to the year of the award (i.e., all papers published in 2017) were eligible for consideration. The primary criteria for the Best Paper Award are the potential impact and interest of the paper for the community, as illustrated by the number of downloads and citations. The editorial board of IEEE TRPMS is pleased to announce that the 2020 Best Paper Award winner is:

Based on the same criteria, the complete list of the top ten papers selected by the editorial board of TRPMS among all papers published in 2017 can be found here.

IEEE TRPMS 2020 Best Paper Award

Review Articles

Advances in Computational Human Phantoms and Their Applications in Biomedical Engineering—A Topical Review

Wolfgang Kainz, Esra Neufeld, Wesley E. Bolch, Christian G. Graff, Chan Hyeong Kim, Niels Kuster, Bryn Lloyd, Tina Morrison, Paul Segars, Yeon Soo Yeom, Maria Zankl, X. George Xu and Benjamin M. W. Tsui

Deep Learning for PET Image Reconstruction

Andrew J. Reader, Guillaume Corda, Abolfazl Mehranian, Casper da Costa-Luis, Sam Ellis and Julia A. Schnabel

A Review of Deep Learning-Based Approaches for Attenuation Correction in Positron Emission Tomography

Jae Sung Lee

Parametric Imaging With PET and SPECT

Jean-Dominique Gallezot, Yihuan Lu, Mika Naganawa and Richard E. Carson


Featured Articles

Total Body PET: Why, How, What for?

Suleman Surti, Austin R. Pantel and Joel S. Karp

Convolutional Neural Network for Crystal Identification and Gamma Ray Localization in PET

Andy LaBella, Paul Vaska, Wei Zhao and Amir H. Goldan

Double Scatter Simulation for More Accurate Image Reconstruction in Positron Emission Tomography

Charles C. Watson, Jicun Hu and Chuanyu Zhou

Monitoring Ion Beam Therapy With a Compton Camera: Simulation Studies of the Clinical Feasibility

M. Fontana, J.-L. Ley, D. Dauvergne, N. Freud, J. Krimmer, J. M. Létang, V. Maxim, M.-H. Richard, I. Rinaldi and É. Testa


Most Read Articles

PET Image Denoising Using a Deep Neural Network Through Fine Tuning

Kuang Gong, Jiahui Guan, Chih-Chieh Liu, and Jinyi Qi

Deep-Neural-Network-Based Sinogram Synthesis for Sparse-View CT Image Reconstruction

Hoyeon Lee, Jongha Lee, Hyeongseok Kim, Byungchul Cho, and Seungryong Cho

Deep Learning-Based Image Segmentation on Multimodal Medical Imaging

Zhe Guo, Xiang Li, Heng Huang, Ning Guo, and Quanzheng Li

A Novel DOI Positioning Algorithm for Monolithic Scintillator Crystals in PET Based on Gradient Tree Boosting

Florian Müller, David Schug, Patrick Hallen, Jan Grahe and Volkmar Schulz


Abstract:
Over the past decades, significant improvements have been made in the field of computational human phantoms (CHPs) and their applications in biomedical engineering, and their sophistication has increased dramatically. The very first CHPs were composed of simple geometric volumes, e.g., cylinders and spheres, while current CHPs offer high resolution, cover a substantial range of the patient population, have high anatomical accuracy, are poseable and morphable, and are augmented with various details to perform functionalized computations. Advances in imaging techniques and semiautomated segmentation tools allow the fast development of personalized CHPs that inherently include the patient's disease. Because many of these CHPs increasingly provide data for regulatory submissions of various medical devices, their validity, anatomical accuracy, and coverage of the entire patient population are of utmost importance. This paper is organized into two main sections: the first reviews the different modeling techniques used to create CHPs, whereas the second discusses various applications of CHPs in biomedical engineering. Each topic gives an overview, a brief history, recent developments, and an outlook into the future.
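The contrast the abstract draws between early stylized phantoms and today's detailed voxel models can be illustrated with a minimal sketch of a "first-generation" phantom built from simple geometric volumes. All dimensions, tissue labels, and the voxel pitch below are invented for illustration; this is not any phantom from the review.

```python
import numpy as np

def stylized_phantom(shape=(64, 64, 64), voxel_mm=4.0):
    """Toy stylized phantom: a cylindrical 'torso' of soft tissue
    with a spherical 'organ' inset, on a coarse voxel grid."""
    z, y, x = np.indices(shape).astype(float)
    cz, cy, cx = (np.array(shape) - 1) / 2.0
    phantom = np.zeros(shape, dtype=np.uint8)   # 0 = air

    # Cylinder along z with a 100 mm radius (soft tissue, label 1)
    r_cyl = 100.0 / voxel_mm
    cylinder = (y - cy) ** 2 + (x - cx) ** 2 <= r_cyl ** 2
    phantom[cylinder] = 1

    # Spherical 'organ' of radius 30 mm, offset from centre (label 2)
    r_sph = 30.0 / voxel_mm
    sphere = ((z - cz) ** 2 + (y - cy - 8) ** 2
              + (x - cx) ** 2) <= r_sph ** 2
    phantom[sphere] = 2
    return phantom

phantom = stylized_phantom()
```

Modern CHPs replace such analytic shapes with segmented, anatomically accurate voxel or mesh models, but the labeled-volume representation sketched here is still the form in which many phantoms are consumed by dose or field solvers.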

Abstract:
This article reviews the use of a sub-discipline of artificial intelligence (AI), deep learning, for the reconstruction of images in positron emission tomography (PET). Deep learning can be used either directly or as a component of conventional reconstruction, in order to reconstruct images from noisy PET data. The review starts with an overview of conventional PET image reconstruction and then covers the principles of general linear and convolution-based mappings from data to images, and proceeds to consider non-linearities, as used in convolutional neural networks (CNNs). Direct deep-learning methodology is then reviewed in the context of PET reconstruction. Direct methods learn the imaging physics and statistics from scratch, not relying on a priori knowledge of these models of the data. In contrast, model-based or physics-informed deep-learning uses existing advances in PET image reconstruction, replacing conventional components with deep-learning data-driven alternatives, such as for the regularisation. These methods use trusted models of the imaging physics and noise distribution, while relying on training data examples to learn deep mappings for regularisation and resolution recovery. After reviewing the main examples of these approaches in the literature, the review finishes with a brief look ahead to future directions.
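The conventional reconstruction that model-based deep learning builds on is typically maximum-likelihood expectation maximisation (MLEM). A minimal NumPy sketch on a toy system matrix (purely illustrative, not the authors' implementation) shows the multiplicative update that physics-informed networks often unroll:

```python
import numpy as np

def mlem(y, A, n_iter=50, eps=1e-12):
    """MLEM for the Poisson model y ~ Poisson(A x).
    Update: x <- x * A^T(y / Ax) / A^T 1."""
    sens = A.sum(axis=0)                 # sensitivity image A^T 1
    x = np.ones(A.shape[1])              # uniform initial image
    for _ in range(n_iter):
        proj = A @ x                     # forward projection
        ratio = y / np.maximum(proj, eps)
        x *= (A.T @ ratio) / np.maximum(sens, eps)
    return x

# Toy 2-voxel, 3-bin problem with noise-free, consistent data
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.5]])
x_true = np.array([4.0, 2.0])
y = A @ x_true
x_hat = mlem(y, A, n_iter=200)
```

Model-based deep-learning methods keep updates of this kind (trusted physics and Poisson statistics) and replace only components such as the regulariser with learned mappings.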

Abstract:
Attenuation correction (AC) is essential for the generation of artifact-free and quantitatively accurate positron emission tomography (PET) images. PET AC based on computed tomography (CT) frequently results in artifacts in attenuation-corrected PET images, and these artifacts mainly originate from CT artifacts and PET-CT mismatches. AC in PET combined with a magnetic resonance imaging (MRI) scanner (PET/MRI) is more complex than in PET/CT, given that MR images do not provide direct information on high-energy photon attenuation. Deep learning (DL)-based methods for the improvement of PET AC have received significant research attention as alternatives to conventional AC methods. Many DL studies have focused on the transformation of MR images into synthetic pseudo-CT images or attenuation maps. Alternative approaches that do not depend on anatomical images (CT or MRI) can overcome the limitations of current CT- and MRI-based AC and allow for more accurate PET quantification in stand-alone PET scanners, for the realization of low radiation doses. In this article, a review is presented of the limitations of PET AC in current dual-modality PET/CT and PET/MRI scanners, in addition to the current status and progress of DL-based approaches for improved PET AC performance.

Abstract:
In molecular imaging modalities such as positron emission tomography (PET) or single photon emission computed tomography (SPECT), parametric imaging is the process of creating fully quantitative 3-D maps of pharmacokinetic parameters from a dynamic series of radiotracer concentration images. An overview is presented of the pharmacokinetic parameters that have been assessed in PET or SPECT studies, and of the kinetic models that have been proposed to compute parametric images. Parametric imaging is challenging due to the high level of noise in the raw images obtained from the scanner, and the additional needs to obtain the input function of the kinetic model and to correct for subject motion during the usually long dynamic scans. Methods that have been proposed to address each of these challenges are reviewed.
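The kinetic-model fitting that parametric imaging repeats at every voxel can be illustrated with the standard one-tissue compartment model, C_t(t) = K1 · (C_p ⊗ e^(−k2·t)). The sketch below (synthetic input function, brute-force grid fit, purely illustrative) estimates (K1, k2) for a single voxel:

```python
import numpy as np

def one_tissue_tac(t, cp, K1, k2):
    """Tissue time-activity curve of the one-tissue compartment model,
    discretised on a uniform time grid: C_t = K1 * (C_p conv exp(-k2 t))."""
    dt = t[1] - t[0]
    irf = np.exp(-k2 * t)                    # impulse response
    return K1 * np.convolve(cp, irf)[: len(t)] * dt

def fit_one_tissue(t, cp, ct, K1_grid, k2_grid):
    """Brute-force least-squares fit of (K1, k2) for one voxel."""
    best, best_err = (None, None), np.inf
    for K1 in K1_grid:
        for k2 in k2_grid:
            err = np.sum((one_tissue_tac(t, cp, K1, k2) - ct) ** 2)
            if err < best_err:
                best, best_err = (K1, k2), err
    return best

t = np.linspace(0.0, 60.0, 240)              # minutes
cp = t * np.exp(-t / 4.0)                    # synthetic input function
ct = one_tissue_tac(t, cp, K1=0.5, k2=0.1)   # noise-free 'voxel' data
K1_hat, k2_hat = fit_one_tissue(t, cp, ct,
                                np.linspace(0.1, 1.0, 10),
                                np.linspace(0.02, 0.2, 10))
```

In practice the input function must be measured or estimated, the data are noisy, and faster solvers (basis functions, linearised models) replace the grid search, which is exactly where the challenges reviewed in the paper arise.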

Abstract:
PET instruments are now available with a long axial field of view (LAFOV) to enable imaging the total body, or at least head and torso, simultaneously and without bed translation. This has two major benefits: a dramatic increase in system sensitivity and the ability to measure kinetics with wider axial coverage so as to include multiple organs. This article presents a review of the technology leading up to the introduction of these new instruments, and explains the benefits of an LAFOV PET-CT instrument. To date there are two platforms developed for total-body PET (TB-PET), an outcome of the EXPLORER Consortium of the University of California at Davis (UC Davis) and the University of Pennsylvania (Penn). The uEXPLORER at UC Davis has an AFOV of 194 cm and was developed by United Imaging Healthcare. The PennPET EXPLORER was developed at Penn and is based on the digital detector from Philips Healthcare. This multiring system is scalable and has been tested with 3 rings but is now being expanded to 6 rings for 140 cm. Initial human studies with both EXPLORER systems have demonstrated the successful implementation and benefits of LAFOV scanners for both clinical and research applications. Examples of such studies are described in this article.

Abstract:
Spatial resolution in positron emission tomography using traditional pixelated block detectors is inherently limited by the size of the detector array elements, namely, the scintillator crystals and readout pixels. Conventional centroiding algorithms based on Anger logic are widely used to localize individual events down to the scintillator level. However, these algorithms are associated with well-known performance degradation along the edges and corners of detector arrays. In this article, we explore the use of convolutional neural networks (CNNs) for 3D gamma ray localization as a computationally inexpensive alternative to classical centroiding. The method is successfully implemented on Monte Carlo simulated data from a single-ended readout depth-encoding detector array consisting of LYSO:Ce scintillator crystals coupled 4-to-1 to silicon photomultiplier (SiPM) pixels. The CNN demonstrated higher crystal identification accuracy at the edges and corners (99.0% versus 49.2%) and lower spatial error compared to classical centroiding (0.38 mm versus 0.76 mm). In addition, the CNN achieved 2.75 mm FWHM depth-of-interaction (DOI) resolution. Preliminary qualitative results for how our approach translates to experimental data after training on simulated data are also presented. Future work on the CNN-based approach with more experimental data could improve the performance of block detectors with multicrystal scintillators and possibly achieve subscintillator spatial resolution.
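The classical Anger-logic centroiding that the CNN is benchmarked against amounts to an energy-weighted centroid of the readout-pixel signals. A minimal sketch (the 2-D array geometry and pixel pitch are illustrative, not the detector in the paper):

```python
import numpy as np

def anger_centroid(signals, pitch_mm=3.2):
    """Classical Anger-logic positioning: the energy-weighted centroid
    of the SiPM pixel signals on a 2-D readout array (in mm)."""
    signals = np.asarray(signals, dtype=float)
    total = signals.sum()
    rows, cols = np.indices(signals.shape)
    y = (rows * signals).sum() / total * pitch_mm
    x = (cols * signals).sum() / total * pitch_mm
    return x, y

# A light spread centred between pixel columns 1 and 2 of the middle row
signals = np.array([[0.0, 1.0, 1.0, 0.0],
                    [0.0, 4.0, 4.0, 0.0],
                    [0.0, 1.0, 1.0, 0.0]])
x, y = anger_centroid(signals)   # x = 1.5 * pitch, y = 1.0 * pitch
```

Because the centroid cannot extend beyond the array, events near edges and corners are systematically pulled inward, which is the degradation the CNN approach is designed to overcome.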

Abstract:
Quantitative reconstruction algorithms for positron emission tomography (PET) require estimating the scattered annihilation radiation contributions to the measured data. This is commonly done by simulating only the single scatter contribution, then scaling this component to the data to account for multiple scatter. This scaling step is sometimes problematic due to inconsistencies and statistical noise in these data. Monte Carlo (MC) simulations suggest that for modern scanners with good energy resolution and a narrow photopeak energy window, multiple scatter is dominated by double scatter, so that a single plus double scatter simulation could account for all but a few percent of the scatter arising from within the field of view (FOV) of the simulation. Consequently, we have extended our single scatter simulation (SSS) algorithm to include double scatter contributions. These are efficiently computed by considering a subset of pairs of the single scatter points. This simulation discriminates the time-of-flight offsets of the scattered radiation as well. By fully accounting for the physics, an absolute scaling is achieved such that no scaling relative to measured data is required to model scatter from within the FOV. The double scatter simulation (DSS) results agree well with independent MC simulations. Computation time for SSS+DSS increases by a small multiple of the time required for SSS only, but remains clinically viable. Results for simulated and measured phantom and human PET studies are presented.

Abstract:
As more and more particle therapy centers are being built worldwide, there is an increasing need for treatment monitoring methods, ideally operating in real time. This article investigates the clinical applicability of a Compton camera design by means of Monte Carlo simulations. The Compton camera performance has been studied with the simulation of a point-like source and beam irradiation of a polymethyl methacrylate (PMMA) phantom. The system's absolute photon detection efficiency, measured via source irradiation, varies in the range [1, 4] × 10⁻⁴ with energy variations in the prompt gamma (PG) energy range and source position shifts with respect to the center of the camera. With proton and carbon beams impinging on a PMMA cylindrical phantom, the number of detected PG coincidences related to various beam time structures has been studied. Finally, the accuracy of the camera in identifying the dose profile fall-off position has been estimated, and two different event reconstruction methods have been compared for this purpose: one based on analytical calculation with a line-cone technique, the second relying on an iterative maximum likelihood expectation maximization algorithm. Both methods showed the possibility to reconstruct the beam depth-dose profile and to retrieve the dose fall-off with millimeter precision on a spot basis.
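The line-cone technique mentioned above starts from the Compton cone whose opening angle follows from the energies deposited in the scatterer and absorber. A hedged sketch of that kinematic step (the example energies are invented, and full absorption of the scattered photon is assumed):

```python
import numpy as np

MEC2_KEV = 511.0  # electron rest energy in keV

def compton_cone_angle(e_scatter_kev, e_absorb_kev):
    """Opening angle (degrees) of the Compton cone from the deposited
    energies, assuming the scattered photon is fully absorbed:
    cos(theta) = 1 - mec2 * (1/E' - 1/E0),
    with E0 = e_scatter + e_absorb and E' = e_absorb."""
    e0 = e_scatter_kev + e_absorb_kev
    cos_theta = 1.0 - MEC2_KEV * (1.0 / e_absorb_kev - 1.0 / e0)
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Illustrative event: a 4.44 MeV prompt gamma depositing 1 MeV
# in the scatterer and the remainder in the absorber
angle = compton_cone_angle(1000.0, 3440.0)
```

Intersecting such cones with the known beam line (the "line" of line-cone) yields candidate emission points along the beam path, from which the depth-dose fall-off can be estimated.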

Abstract:
Positron emission tomography (PET) is a functional imaging modality widely used in clinical diagnosis. In this paper, we trained a deep convolutional neural network to improve PET image quality. Perceptual loss based on features derived from a pretrained VGG network, instead of the conventional mean squared error, was employed as the training loss function to preserve image details. As the number of real patient data sets available for training is limited, we propose to pretrain the network using simulation data and fine-tune the last few layers of the network using real data sets. Results from simulation, real brain, and lung data sets show that the proposed method is more effective in removing noise than the traditional Gaussian filtering method.
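The fine-tuning strategy described, pretraining on simulation and updating only the last layers on real data, can be sketched with a toy two-layer network in NumPy. This is a deliberate simplification: the paper uses a deep CNN with a VGG-based perceptual loss, while the sketch uses a plain MSE loss purely to illustrate freezing early layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network; pretend W1 was "pretrained" on simulation data.
W1 = rng.normal(size=(16, 8)) / 4.0     # early layer, kept frozen
W2 = rng.normal(size=(8, 1)) * 0.1      # last layer, to be fine-tuned

def forward(X):
    h = np.maximum(X @ W1, 0.0)         # ReLU features from the frozen layer
    return h, h @ W2

# Synthetic "real" training pairs with a known target
X = rng.normal(size=(200, 16))
H, _ = forward(X)
y = H @ rng.normal(size=(8, 1))

W1_before = W1.copy()
_, pred0 = forward(X)
mse_before = float(np.mean((pred0 - y) ** 2))

# Fine-tune only W2 by gradient descent on the MSE; W1 is never updated.
lr = 0.02
for _ in range(1000):
    h, pred = forward(X)
    W2 -= lr * (2.0 * h.T @ (pred - y) / len(X))

_, pred = forward(X)
mse = float(np.mean((pred - y) ** 2))
```

Restricting updates to the last layers keeps the simulation-learned features intact while adapting the output mapping to the statistics of the scarce real data.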

Abstract:
Recently, a number of approaches to low-dose computed tomography (CT) have been developed and deployed in commercial CT scanners. Tube current reduction, combined with advanced image reconstruction algorithms, is perhaps the most actively explored technology. Sparse data sampling is another viable option for low-dose CT, and sparse-view CT has been of particular interest among researchers in the CT community. Since analytic image reconstruction algorithms would lead to severe image artifacts, various iterative algorithms have been developed for reconstructing images from sparsely view-sampled projection data. However, iterative algorithms take much longer to compute than analytic algorithms, and the images are usually prone to different types of artifacts that depend heavily on the reconstruction parameters. Interpolation methods have also been utilized to fill in the missing data in the sinogram of sparse-view CT, providing synthetically full data for analytic image reconstruction. In this paper, we introduce a deep-neural-network-enabled sinogram synthesis method for sparse-view CT, and show that it outperforms existing interpolation methods as well as the iterative image reconstruction approach.
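The interpolation baseline the paper compares against can be sketched directly: fill the missing views of a sparse-view sinogram by linear interpolation along the angular axis, one detector bin at a time. The toy sinogram below is invented for illustration; the deep network replaces exactly this synthesis step with a learned mapping.

```python
import numpy as np

def interpolate_sinogram(sparse_sino, sparse_angles_deg, full_angles_deg):
    """Fill missing projection views by linear interpolation along the
    angular axis, column by column (one detector bin at a time)."""
    n_bins = sparse_sino.shape[1]
    full = np.empty((len(full_angles_deg), n_bins))
    for b in range(n_bins):
        full[:, b] = np.interp(full_angles_deg, sparse_angles_deg,
                               sparse_sino[:, b])
    return full

# Toy sinogram: 180 views of a smooth angular profile, subsampled 4x
full_angles = np.arange(0.0, 180.0, 1.0)
bins = np.linspace(-1.0, 1.0, 32)
truth = np.cos(np.radians(full_angles))[:, None] ** 2 + bins[None, :] ** 2
sparse = truth[::4]                       # keep every 4th view only
synth = interpolate_sinogram(sparse, full_angles[::4], full_angles)
err = float(np.max(np.abs(synth - truth)))
```

Linear interpolation works well on smooth sinograms like this one but blurs the sharp sinusoidal traces of small, high-contrast structures, which is where learned synthesis offers its advantage.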

Abstract:
Multimodality medical imaging techniques have been increasingly applied in clinical practice and research studies. Corresponding multimodal image analysis and ensemble learning schemes have seen rapid growth and bring unique value to medical applications. Motivated by the recent success of applying deep learning methods to medical image processing, we first propose an algorithmic architecture for supervised multimodal image analysis with cross-modality fusion at the feature learning level, classifier level, and decision-making level. We then design and implement an image segmentation system based on deep convolutional neural networks to contour the lesions of soft tissue sarcomas using multimodal images, including those from magnetic resonance imaging, computed tomography, and positron emission tomography. The network trained with multimodal images shows superior performance compared to networks trained with single-modal images. For the task of tumor segmentation, performing image fusion within the network (i.e., fusing at convolutional or fully connected layers) is generally better than fusing images at the network output (i.e., voting). This paper provides empirical guidance for the design and application of multimodal image analysis.
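Of the three fusion levels discussed, decision-making-level fusion ("voting") is the simplest to sketch: each modality produces its own lesion probability map and the maps are combined at the output. The toy maps below are invented for illustration; the paper's preferred feature- and classifier-level fusion happens inside the network instead.

```python
import numpy as np

def decision_fusion(prob_maps, threshold=0.5):
    """Decision-level fusion ('voting'): average the per-modality lesion
    probability maps, then threshold into a binary segmentation mask."""
    fused = np.mean(prob_maps, axis=0)
    return (fused >= threshold).astype(np.uint8)

# Three toy per-modality lesion probability maps (e.g., MRI, CT, PET)
mri = np.array([[0.9, 0.2], [0.8, 0.1]])
ct  = np.array([[0.7, 0.4], [0.2, 0.3]])
pet = np.array([[0.8, 0.1], [0.6, 0.2]])
mask = decision_fusion([mri, ct, pet])
```

Fusing earlier, at the feature or classifier level, lets the network learn cross-modality interactions that simple averaging at the output cannot capture, which is the paper's empirical finding.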

Abstract:
Monolithic crystals are examined as an alternative to segmented scintillator arrays in positron emission tomography (PET). Monoliths provide good energy, timing, and spatial resolution, including intrinsic depth of interaction (DOI) encoding. DOI allows reducing parallax errors (radial astigmatism) at off-center positions within a PET ring. We present a novel DOI-estimation approach based on the supervised machine learning algorithm gradient tree boosting (GTB). GTB builds predictive regression models based on sequential binary comparisons (decision trees). GTB models have been shown to be implementable on an FPGA if the memory requirement fits the available resources. We propose two optimization scenarios for the best possible positioning performance: one restricting the available memory to enable a future FPGA implementation, and one without any restrictions. The positioning performance of the GTB models is compared with a DOI estimation method based on a single DOI observable (SO), comparable to other methods presented in the literature. For a 12 mm high monolith, we achieve an averaged spatial resolution of 2.15 mm and 2.12 mm FWHM for SO and GTB models, respectively. In contrast to SO models, GTB models show a nearly uniform positioning performance over the whole crystal depth.
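Gradient tree boosting itself can be illustrated with a hand-rolled version built from regression stumps: each round fits a stump to the current residuals and adds a shrunken copy to the ensemble. The one-feature "DOI" target below is synthetic and purely illustrative, not the authors' FPGA-oriented models trained on light-distribution features.

```python
import numpy as np

def fit_stump(x, residual):
    """Best single-split regression stump on a sorted 1-D feature
    (least squares). Returns (threshold, left value, right value)."""
    order = np.argsort(x)
    xs, rs = x[order], residual[order]
    best = (xs[0] - 1.0, rs.mean(), rs.mean())   # fallback: constant
    best_err = np.sum((rs - rs.mean()) ** 2)
    for i in range(1, len(xs)):
        left, right = rs[:i], rs[i:]
        err = (np.sum((left - left.mean()) ** 2)
               + np.sum((right - right.mean()) ** 2))
        if err < best_err:
            best_err = err
            best = ((xs[i - 1] + xs[i]) / 2.0, left.mean(), right.mean())
    return best

def gtb_fit(x, y, n_rounds=50, lr=0.3):
    """Gradient boosting for squared error: fit a stump to the residuals
    each round and add it with shrinkage lr."""
    pred = np.full_like(y, y.mean())
    model = [y.mean()]
    for _ in range(n_rounds):
        thr, lv, rv = fit_stump(x, y - pred)
        pred = pred + lr * np.where(x <= thr, lv, rv)
        model.append((thr, lv, rv))
    return model, pred

# Toy 'DOI' target: two depth plateaus as a function of a light-share feature
x = np.linspace(0.0, 1.0, 200)
y = np.where(x < 0.5, 2.0, 8.0)                  # two DOI levels, in mm
model, pred = gtb_fit(x, y)
mse = float(np.mean((pred - y) ** 2))
```

Because a trained ensemble is just a sequence of threshold comparisons and additions, it maps naturally onto FPGA lookup logic, which is the memory-versus-accuracy trade-off the paper's two optimization scenarios explore.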