Articles

Deep-Neural-Network-Based Sinogram Synthesis for Sparse-View CT Image Reconstruction

Hoyeon Lee, Jongha Lee, Hyeongseok Kim, Byungchul Cho, and Seungryong Cho

PET Image Denoising Using a Deep Neural Network Through Fine Tuning

Kuang Gong , Jiahui Guan, Chih-Chieh Liu, and Jinyi Qi

Use of Generative Disease Models for Analysis and Selection of Radiomic Features in PET

Ivan S. Klyuzhin , Jessie F. Fu, Nikolay Shenkov, Arman Rahmim, and Vesna Sossi

Advances in Computational Human Phantoms and Their Applications in Biomedical Engineering—A Topical Review

Wolfgang Kainz, Esra Neufeld, Wesley E. Bolch, Christian G. Graff, Chan Hyeong Kim, Niels Kuster, Bryn Lloyd, Tina Morrison, Paul Segars, Yeon Soo Yeom, Maria Zankl, X. George Xu and Benjamin M. W. Tsui

Featured Articles

Machine (Deep) Learning Methods for Image Processing and Radiomics

Mathieu Hatt, Chintan Parmar, Jinyi Qi and Issam El Naqa

Deep Learning-Based Image Segmentation on Multimodal Medical Imaging

Zhe Guo , Xiang Li , Heng Huang, Ning Guo, and Quanzheng Li

Creating Robust Predictive Radiomic Models for Data From Independent Institutions Using Normalization

Avishek Chatterjee, Martin Vallières, Anthony Dohan, Ives R. Levesque, Yoshiko Ueno, Sameh Saif, Caroline Reinhold, and Jan Seuntjens

Artificial Neural Network With Composite Architectures for Prediction of Local Control in Radiotherapy

Sunan Cui, Yi Luo, Huan-Hsin Tseng, Randall K. Ten Haken, and Issam El Naqa


Review Articles

3-D Image-Based Dosimetry in Radionuclide Therapy

M. Ljungberg and K. Sjögreen Gleisner

Mechanisms of Plasma Medicine: Coupling Plasma Physics, Biochemistry, and Biology

David B. Graves

Organ-Dedicated Molecular Imaging Systems

Antonio J. González, Filomeno Sánchez, José M. Benlloch


Most Read Articles

Pushing the Limits in Time-of-Flight PET Imaging

P. Lecoq

Performance Study of a Large Monolithic LYSO PET Detector With Accurate Photon DOI Using Retroreflector Layers

Andrea González-Montoro, Albert Aguilar, Gabriel Cañizares, Pablo Conde, Liczandro Hernández, Luis F. Vidal, Matteo Galasso, Andrea Fabbri, Filomeno Sánchez, José M. Benlloch, and Antonio J. González

Low Power and Small Area, 6.9 ps RMS Time-to-Digital Converter for 3-D Digital SiPM

Nicolas Roy, Frédéric Nolet, Frédérik Dubois, Marc-Olivier Mercier, Réjean Fontaine and Jean-François Pratte

Gradient Tree Boosting-Based Positioning Method for Monolithic Scintillator Crystals in Positron Emission Tomography

Florian Müller, David Schug, Patrick Hallen, Jan Grahe and Volkmar Schulz


Abstract:
Methods from the field of machine (deep) learning have been successful in tackling a number of tasks in medical imaging, from image reconstruction or processing to predictive modeling, clinical planning, and decision-aid systems. The ever-growing availability of data and the improving ability of algorithms to learn from them have led to the rise of methods based on neural networks to address most of these tasks with higher efficiency and often superior performance than previous, "shallow" machine learning methods. The present editorial aims at contextualizing within this framework the recent developments of these techniques, including those described in the papers published in the present special issue on machine (deep) learning for image processing and radiomics in radiation-based medical sciences.

Abstract:
Multimodality medical imaging techniques have been increasingly applied in clinical practice and research studies. Corresponding multimodal image analysis and ensemble learning schemes have seen rapid growth and bring unique value to medical applications. Motivated by the recent success of applying deep learning methods to medical image processing, we first propose an algorithmic architecture for supervised multimodal image analysis with cross-modality fusion at the feature learning level, classifier level, and decision-making level. We then design and implement an image segmentation system based on deep convolutional neural networks to contour the lesions of soft tissue sarcomas using multimodal images, including those from magnetic resonance imaging, computed tomography, and positron emission tomography. The network trained with multimodal images shows superior performance compared to networks trained with single-modal images. For the task of tumor segmentation, performing image fusion within the network (i.e., fusing at convolutional or fully connected layers) is generally better than fusing images at the network output (i.e., voting). This paper provides empirical guidance for the design and application of multimodal image analysis.
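The fusion levels distinguished in this abstract can be illustrated with a toy sketch (pure Python; the function names, numbers, and the sigmoid classifier are illustrative assumptions, not the paper's implementation):

```python
import math

def decision_level_fusion(probs_per_modality):
    """Fuse at the decision level: average each modality's class probabilities (voting)."""
    n = len(probs_per_modality)
    n_classes = len(probs_per_modality[0])
    return [sum(p[c] for p in probs_per_modality) / n for c in range(n_classes)]

def feature_level_fusion(features_per_modality, weights, bias=0.0):
    """Fuse at the feature level: concatenate features, apply one linear classifier."""
    concat = [x for feats in features_per_modality for x in feats]
    score = sum(w * x for w, x in zip(weights, concat)) + bias
    return 1.0 / (1.0 + math.exp(-score))  # sigmoid -> lesion probability

# Toy example: MRI, CT, PET classifiers each output (background, lesion) probabilities.
mri, ct, pet = (0.3, 0.7), (0.4, 0.6), (0.1, 0.9)
fused = decision_level_fusion([mri, ct, pet])  # lesion probability = mean of 0.7, 0.6, 0.9
```

Feature-level fusion lets a single classifier weight cross-modality interactions, which output-level voting cannot do; this is consistent with the abstract's finding that fusing inside the network generally outperforms voting.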

Abstract:
The distribution of a radiomic feature can differ between two institutions due to, for example, different image acquisition parameters, imaging systems, and contouring (i.e., tumor delineation) variations between clinicians. We aimed to develop effective statistical methods to successfully apply a radiomics-based predictive model to an external dataset. Theory: Two common feature normalization methods, rescaling and standardization, were evaluated for suitability in reducing feature variability between institutions. Standardization was chosen as the preferred approach, since rescaling was more sensitive to statistical outliers and potentially reduced the discrimination power of a feature. It was also demonstrated why a dataset needs to be balanced between positive and negative outcomes before standardization is applied to it. Methods: In this paper, the novelty and power of the developed method for improved application of radiomics models on external datasets lie in finding the normalization transformations separately for each independent set. The clinical effectiveness of the normalization method was shown using magnetic resonance images of primary uterine adenocarcinoma. Feature selection was done using 94 samples (Institution X), and feature testing was done using 63 samples (Institution Y). The outcomes studied were lymphovascular space invasion and cancer staging. Logistic regression was used to obtain the prediction accuracy of a feature. Promising radiomic features were defined as those with AUC > 0.75 in the training set. Results: When comparing the prediction accuracy, $F$-score, and Matthews correlation coefficient (MCC) of promising radiomic features in the testing set with and without standardization, there was an improvement due to standardization. For cancer stage prediction, average accuracy for all promising features rose from 0.64 to 0.72, average $F$-score from 0.48 to 0.71, and average MCC from 0.34 to 0.44 ($p < 10^{-5}$). Furthermore, when applying standardization, the ratio of sensitivity to specificity was close to unity in the testing set, comparable to the ratio in the training set. Without standardization, this ratio deviated significantly from unity in the testing set. Conclusions: Applying feature standardization separately for each independent set, with imbalance adjustments, was shown to improve the predictive ability of radiomic models when applied to a dataset from an external institution.
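A minimal sketch of the core idea above: z-score each institution's (class-balanced) cohort with its own statistics, never the other institution's. The feature values and the naive downsampling balancer are toy assumptions, not the paper's data or procedure:

```python
import statistics

def balance(samples, labels):
    """Downsample the majority class so positives and negatives are equal (toy scheme)."""
    pos = [s for s, y in zip(samples, labels) if y == 1]
    neg = [s for s, y in zip(samples, labels) if y == 0]
    n = min(len(pos), len(neg))
    return pos[:n] + neg[:n]

def standardize(values):
    """Z-score using this cohort's own mean and (population) standard deviation."""
    mu = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [(v - mu) / sd for v in values]

# Institution Y's scanner reports the same feature on a 10x larger scale;
# per-institution standardization removes that scale/offset difference.
inst_x = standardize(balance([2.0, 4.0, 6.0, 8.0], [1, 1, 0, 0]))
inst_y = standardize(balance([20.0, 40.0, 60.0, 80.0], [1, 1, 0, 0]))
```

Because each cohort is normalized with its own mean and standard deviation, a pure scale or offset shift between institutions (as in the toy numbers) vanishes, so a model trained on one set of z-scores can be applied to the other.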

Abstract:
In this paper, we investigated the application of artificial neural networks with composite architectures to the prediction of local control (LC) in lung cancer patients after radiotherapy. The motivation was to exploit the temporal associations among longitudinal (sequential) data to improve the predictive performance of outcome models under the circumstance of limited sample sizes. Two composite architectures were implemented for this purpose: 1) 1-D convolutional + fully connected layers and 2) locally connected + fully connected layers. Compared with the fully connected architecture [multilayer perceptron (MLP)], our composite architectures yielded better predictive performance of LC in lung cancer patients who received radiotherapy. Specifically, in a cohort of 98 patients (29 patients failed locally), the composite architecture of 1-D convolutional layers and fully connected layers achieved an area under the receiver operating characteristic curve (AUC) of 0.83 [95% confidence interval (CI): 0.807–0.841] with 18 features (14 of them longitudinal), whereas the composite architecture of locally connected layers and fully connected layers achieved an AUC of 0.80 (95% CI: 0.775–0.811). Both outperformed an MLP with the same set of features, which achieved an AUC of 0.78 (95% CI: 0.751–0.790); $P$-values for the differences in AUC using the DeLong test were $1.609 \times 10^{-14}$ and $1.407 \times 10^{-4}$, respectively.
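The 1-D convolutional component can be illustrated with a minimal sketch (pure Python; the kernel and data are hypothetical): a small kernel sliding over a longitudinal feature sequence extracts local temporal patterns, such as the trend between consecutive measurements, that a plain MLP would have to learn from unstructured inputs.

```python
def conv1d(sequence, kernel):
    """Valid-mode 1-D convolution (really cross-correlation, as in deep learning)."""
    k = len(kernel)
    return [sum(sequence[i + j] * kernel[j] for j in range(k))
            for i in range(len(sequence) - k + 1)]

# e.g., four sequential measurements of a hypothetical biomarker during treatment
longitudinal = [1.0, 2.0, 4.0, 7.0]
trend = conv1d(longitudinal, [-1.0, 1.0])  # finite-difference kernel: local change
```

In the composite architecture described above, such convolutional outputs would feed into fully connected layers alongside the static (non-longitudinal) features.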

Abstract:
Over the past decades, significant improvements have been made in the field of computational human phantoms (CHPs) and their applications in biomedical engineering. Their sophistication has dramatically increased. The very first CHPs were composed of simple geometric volumes, e.g., cylinders and spheres, while current CHPs have a high resolution, cover a substantial range of the patient population, have high anatomical accuracy, are poseable and morphable, and are augmented with various details to perform functionalized computations. Advances in imaging techniques and semiautomated segmentation tools allow fast, personalized development of CHPs that inherently include the patient's disease. Because many of these CHPs are increasingly providing data for regulatory submissions of various medical devices, their validity, anatomical accuracy, and ability to cover the entire patient population are of utmost importance. This paper is organized into two main sections: the first section reviews the different modeling techniques used to create CHPs, whereas the second section discusses various applications of CHPs in biomedical engineering. Each topic gives an overview, a brief history, recent developments, and an outlook into the future.

Abstract:
Radionuclide therapy is the use of radioactive drugs for internal radiotherapy, mainly for the treatment of metastatic disease. As opposed to systemic cancer therapies in general, the use of radioactively labeled drugs results not only in a targeted therapy but also in the possibility of imaging the distribution of the drug during therapy. From such images, the absorbed doses delivered to tumors and organs at risk can be calculated. Calculation of the absorbed dose from 3-D images such as single-photon emission computed tomography (SPECT)/CT, and in some cases positron emission tomography (PET)/CT, relies on image-based activity quantification. Quantification is accomplished by modeling the physics involved in the image-formation process and applying image-processing methods. From a time-sequence of such quantitative images, the absorbed doses are then calculated. Although individual-patient dosimetry is a standard component of other forms of radiotherapy, it is still overlooked in the majority of radionuclide therapies. In this review, we summarize the physical and technical problems that need to be addressed in image-based dosimetry. The focus is on SPECT, since most of the radionuclides used are single-photon emitters, although the use of PET is also discussed. Issues of relevance for the practical implementation of personalized dosimetry in radionuclide therapy are also highlighted.
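The dose-calculation step described above can be sketched in the standard MIRD-style formalism: integrate the time-activity curve from the quantitative image sequence (trapezoidal rule here) to obtain the cumulated activity, then multiply by an S value (absorbed dose per unit cumulated activity). All numbers below are hypothetical:

```python
def cumulated_activity(times_h, activities_mbq):
    """Trapezoidal integral of activity over time -> MBq*h (ignores the tail beyond the last sample)."""
    area = 0.0
    for (t0, a0), (t1, a1) in zip(zip(times_h, activities_mbq),
                                  zip(times_h[1:], activities_mbq[1:])):
        area += 0.5 * (a0 + a1) * (t1 - t0)
    return area

times = [0.0, 24.0, 72.0]   # h, imaging time points (e.g., serial SPECT/CT)
acts = [100.0, 60.0, 20.0]  # MBq quantified in a tumor volume of interest
s_value = 1e-3              # Gy per MBq*h, hypothetical S value for this source-target pair
dose_gy = cumulated_activity(times, acts) * s_value
```

In practice the tail of the curve after the last time point is extrapolated (e.g., by physical or effective decay) rather than truncated, and S values come from Monte Carlo or published tabulations.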

Abstract:
Low temperature plasma (LTP) has emerged in the last decade as a novel and promising therapy for wound and skin decontamination, promotion of wound healing, cancer remission, control of wound-resident multidrug resistant bacteria, and dental and cosmetic applications, among others. Progress has been rapid in developing clinically useful devices and many studies are underway worldwide. Mechanisms of plasma therapeutics are beginning to be understood but much remains to be explored. This review focuses on mechanisms coupling the physics and chemistry of LTPs to medically relevant biochemistry and biology.

Abstract:
In this review, we will cover both clinical and technical aspects of the advantages and disadvantages of organ-specific (dedicated) molecular imaging (MI) systems, namely positron emission tomography (PET) and single photon emission computed tomography, including gamma cameras. This review starts with an introduction to organ-dedicated MI systems. Thereafter, we describe their differences and advantages/disadvantages compared with standard large-size scanners. We review the time evolution of dedicated systems, from first attempts to current scanners, including those that reached clinical use. We then review the state of the art of these systems for different organs, namely: breast, brain, heart, and prostate. We also present the advantages offered by these systems as a function of the particular application or field, such as in surgery, therapy assistance and assessment, etc. Their technological evolution is introduced for each organ-based imager. Some of the advantages of dedicated devices are: higher sensitivity by placing the detectors closer to the organ, improved spatial resolution, better image contrast recovery (by reducing the noise from other organs), and lower cost. Designing a complete ring-shaped dedicated PET scanner is sometimes difficult, and limited-angle tomography systems are preferable as they allow more flexibility in placing the detectors around the body/organ. Examples of these geometries are presented for breast, prostate, and heart imaging. Recently achievable time-of-flight capabilities below 300 ps full width at half maximum significantly reduce the impact of missing angles on the reconstructed images.

Abstract:
There is an increasing demand for high-sensitivity multiparametric medical imaging approaches. High precision time-of-flight positron emission tomography (TOFPET) scanners have a very high potential in this context, providing an improvement in the signal-to-noise ratio of the reconstructed image and the possibility to further increase the already very high sensitivity (at the picomolar level) of PET scanners. If the present state-of-the-art coincidence time resolution of about 500 ps can be improved, it will open the way, in particular, to a significant reduction of the dose injected into the patient and, consequently, to the possibility of extending the use of PET scans to new categories of patients. This paper describes the systematic approach followed by a number of researchers worldwide to push the limits of TOFPET imaging to the sub-100 ps level. It will be shown that reaching 10 ps, although extremely challenging, is not limited by physical barriers, and that a number of disruptive technologies are presently being investigated at the level of all the components of the detection chain to gain at least a factor of 10 as compared to the present state of the art.
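The stakes of improving the coincidence time resolution (CTR) can be sketched with the standard relation Δx = c·Δt/2, which gives the uncertainty in localizing the annihilation point along the line of response (LOR):

```python
C_MM_PER_PS = 0.299792458  # speed of light in mm per picosecond

def localization_mm(ctr_ps):
    """Positional uncertainty along the LOR for a given coincidence time resolution."""
    return C_MM_PER_PS * ctr_ps / 2.0

for ctr in (500.0, 100.0, 10.0):
    print(f"CTR {ctr:5.0f} ps -> ~{localization_mm(ctr):6.2f} mm along the LOR")
```

At 500 ps the event is localized to roughly 7.5 cm, at 100 ps to 1.5 cm, and at the 10 ps target to about 1.5 mm, at which point the annihilation is essentially pinned down along the LOR and the TOF kernel approaches the scanner's intrinsic spatial resolution.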

Abstract:
Clinical and organ-dedicated PET systems typically require a high efficiency, imposing the use of thick scintillators, normally through crystal arrays. To provide depth of interaction (DOI) information, two or more layers are sometimes mounted in the staggered or phoswich approach. In this paper, we propose an alternative using thick and large monolithic crystals. We have tested two surface treatments for a 50 mm × 50 mm × 20 mm LYSO block, and provide data as close as 5 mm to the lateral walls. The lateral walls were left black painted, and the exit face coupled to the photosensor (12 × 12 SiPM array) was polished. The entrance face was either: 1) black painted or 2) coupled to a retroreflector (RR) layer. These configurations preserve good DOI linearity and, on average, reached 4 mm DOI resolution, measured as the full width at half maximum. Approaches using RR layers yield consistently good energy resolution of about 12%, compared with a range of 15%–16% for the fully black-painted blocks. The best detector spatial resolution was obtained with one of the smallest RRs (120 μm corner-cube size): 1.7 mm at the entrance crystal layer and 0.7 mm in the layer closest to the photosensor. These values worsen by at least 30% for the black-painted treatment.

Abstract:
Time-of-flight measurements are becoming essential to the advancement of several fields, such as preclinical positron emission tomography and high energy physics. Recent developments in single photon avalanche diode (SPAD)-based detectors have spawned a great interest in digital silicon photomultipliers (dSiPMs). To overcome the tradeoff between the photosensitive area and the processing capabilities in current 2-D dSiPMs, we propose a novel 3-D digital SiPM, where the SPAD array, designed for maximal photosensitive area, is stacked in 3-D over the electronic circuits, designed in a CMOS node technology. All readout circuits are implemented directly under the SPAD real estate, including the quenching circuit, time-to-digital converter (TDC), and digital readout electronics. This paper focuses on the TDC element of this system, designed in TSMC 65 nm CMOS. This ring-oscillator-based Vernier TDC requires only 25 × 50 μm² and 160 μW, and achieves 6.9 ps rms timing accuracy.
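An idealized model of the Vernier principle (not the chip's actual circuit): two ring oscillators with slightly different periods T1 > T2 close a time gap dt in about dt / (T1 − T2) cycles, so the converter's least significant bit is the period difference rather than a full oscillator period. The periods below are hypothetical:

```python
import math

def vernier_code(dt_ps, t1_ps, t2_ps):
    """Cycles until the fast oscillator (period t2) catches the slow one (period t1)."""
    lsb = t1_ps - t2_ps  # effective time resolution of the Vernier pair
    return math.ceil(dt_ps / lsb)

# e.g., a 107 ps interval with T1 = 70 ps, T2 = 63 ps -> 7 ps LSB,
# the same order as the 6.9 ps rms accuracy reported above
code = vernier_code(107.0, 70.0, 63.0)  # dt is read out as code x 7 ps
```

This is why a Vernier TDC can resolve intervals far shorter than either oscillator period, at the cost of a conversion time that grows with dt.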

Abstract:
Monolithic crystals are considered an alternative to complex segmented scintillator arrays in positron emission tomography systems. Monoliths provide high sensitivity and good timing and energy resolution while being cheaper than highly segmented arrays. Furthermore, monoliths enable intrinsic depth of interaction capabilities and good spatial resolutions (SRs), mostly based on statistical calibrations. To widely translate monoliths into clinical applications, a time-efficient calibration method and a positioning algorithm implementable in system architectures such as field-programmable gate arrays (FPGAs) are required. We present a novel positioning algorithm based on gradient tree boosting (GTB) and a fast fan-beam calibration requiring less than 1 h per detector block. GTB is a supervised machine learning technique building a set of sequential binary decisions (decision trees). The algorithm handles different sets of input features, their combinations, and partially missing data. GTB models are highly adaptable, influencing both the positioning performance and the memory requirement of trained positioning models. For an FPGA implementation, the memory requirement is the limiting aspect. We demonstrate a general optimization and propose two different optimization scenarios: one without compromising on positioning performance and one optimizing the positioning performance for a given memory restriction. For a 12 mm high LYSO block, we achieve an SR better than 1.4 mm FWHM.
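A heavily simplified sketch of the boosting idea (the paper boosts full decision trees over detector light distributions; here we boost depth-1 stumps on a single hypothetical feature to regress a 1-D interaction position):

```python
def fit_stump(x, residuals):
    """Best single-threshold split minimizing squared error of the residuals."""
    best = None
    for thr in sorted(set(x))[:-1]:  # last value would leave the right side empty
        left = [r for xi, r in zip(x, residuals) if xi <= thr]
        right = [r for xi, r in zip(x, residuals) if xi > thr]
        lmean = sum(left) / len(left)
        rmean = sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, thr, lmean, rmean)
    _, thr, lmean, rmean = best
    return lambda xi: lmean if xi <= thr else rmean

def boost(x, y, n_rounds=40, lr=0.5):
    """Gradient boosting for squared loss: each stump fits the current residuals."""
    pred = [0.0] * len(y)
    stumps = []
    for _ in range(n_rounds):
        resid = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, resid)
        stumps.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: sum(lr * s(xi) for s in stumps)

# Toy calibration data: a light-sharing ratio mapped to a position in mm.
x = [0.1, 0.3, 0.5, 0.7, 0.9]
y = [2.0, 6.0, 10.0, 14.0, 18.0]
model = boost(x, y)
```

The real models additionally trade tree count and depth against FPGA memory, which is the optimization the abstract describes.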