The paper presents an overview of the image analysis activities of the Brno DAR group in the medical application area of retinal imaging. In particular, illumination correction and SNR enhancement by registered averaging are briefly described as preprocessing steps; further, mono- and multimodal registration methods developed for specific types of ophthalmological images are covered, together with methods for segmentation of the optic disc, the retinal vessel tree, and autofluorescence areas. Finally, the designed methods for nerve fibre layer detection and evaluation on retinal images, utilising different combined texture analysis approaches and several types of classifiers, are shown. The results in all the areas are briefly commented on in the respective sections. In order to emphasise methodological aspects, the methods and results are ordered according to consecutive phases of processing rather than divided according to individual medical applications.
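The SNR-enhancement step mentioned above rests on a simple principle: averaging N co-registered frames of the same retina leaves the signal intact while reducing the standard deviation of uncorrelated noise by roughly a factor of sqrt(N). A minimal sketch of this idea, assuming the frames have already been registered and the noise is additive Gaussian (the synthetic image and noise level here are illustrative, not from the paper):

```python
import numpy as np

def average_registered(frames):
    """Average a list of co-registered frames.

    For uncorrelated zero-mean noise, the noise standard deviation of the
    result drops by roughly 1/sqrt(len(frames)).
    """
    stack = np.stack(frames, axis=0).astype(np.float64)
    return stack.mean(axis=0)

# Demo on a synthetic "retinal" intensity ramp with additive Gaussian noise.
rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
frames = [clean + rng.normal(0.0, 0.1, clean.shape) for _ in range(16)]

avg = average_registered(frames)
noise_single = np.std(frames[0] - clean)  # ~0.1
noise_avg = np.std(avg - clean)           # ~0.1 / sqrt(16) = ~0.025
```

In practice the registration step itself (compensating eye movements between frames) is the hard part; the averaging shown here only pays off once the frames are aligned to sub-pixel accuracy.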
In remote sensing, data are captured by sensors operating at different wavelengths, and each sensor has its own capabilities and limitations. Synthetic aperture radar (SAR) collects data with high spatial and radiometric resolution, while optical remote sensors capture images with good spectral information. With a suitable fusion algorithm, images fused from these sensors carry richer information, supporting applications such as weather forecasting, soil exploration, and crop classification. This work fuses optical and radar data from the Sentinel series satellites using a deep learning-based convolutional neural network (CNN). The fusion proceeds in three stages within the CNN's layered architecture: the image transform is performed in the convolutional layer, activity-level measurement in the max-pooling layer, and, finally, decision-making in the fully connected layer. The objective of the work is to show that the proposed deep learning-based CNN fusion approach overcomes some of the difficulties of traditional image fusion approaches. To assess the performance of the CNN-based fusion, a range of image quality assessment metrics is analyzed. The results demonstrate that the integration of spatial and spectral information is numerically evident in the output image and that the method is highly robust. In the objective assessment, the proposed approach outperforms state-of-the-art fusion methodologies.
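The three-stage pipeline described above (convolutional transform, activity-level measurement by max pooling, fully connected decision) can be sketched roughly as follows. This is a toy NumPy illustration of the general idea, not the paper's trained network: the Sobel kernel, the random fully connected weights, and the softmax blending of the two sources are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv2d(img, kernel):
    """Stage 1, 'image transform': valid 2-D convolution of one band."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(fmap, size=2):
    """Stage 2, 'activity-level measurement': non-overlapping max pooling."""
    h2, w2 = fmap.shape[0] // size, fmap.shape[1] // size
    return fmap[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

def fuse(sar, optical, kernel, fc_weights):
    """Stage 3, 'decision-making': a fully connected layer maps the pooled
    activity levels of each source to blending weights (softmax-normalised),
    which then combine the two inputs pixel-wise."""
    act_sar = max_pool(np.abs(conv2d(sar, kernel)))
    act_opt = max_pool(np.abs(conv2d(optical, kernel)))
    feats = np.array([act_sar.mean(), act_opt.mean()])
    logits = fc_weights @ feats                       # hypothetical trained FC weights
    w = np.exp(logits) / np.exp(logits).sum()         # softmax over the two sources
    return w[0] * sar + w[1] * optical

# Toy single-band SAR and optical patches (real inputs would be Sentinel-1/-2 tiles).
sar = rng.random((16, 16))
optical = rng.random((16, 16))
kernel = np.array([[1., 0., -1.], [2., 0., -2.], [1., 0., -1.]])  # Sobel as a stand-in
fused = fuse(sar, optical, kernel, rng.normal(size=(2, 2)))
```

Because the final step is a convex combination of the two inputs, every fused pixel lies between the corresponding SAR and optical values; a learned per-pixel (rather than global) weight map would be the natural next refinement.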