In this article we combine neural networks with other techniques to analyze orthophotos. Our goal is to obtain results that can serve as useful groundwork for detailed interactive exploration of the terrain. In our approach we split an aerial photo into a regular grid of segments and detect a set of features for each segment. These features describe the segment from the viewpoint of general image analysis (color, tint, etc.) as well as from the viewpoint of the shapes it contains. We perform clustering based on the Formal Concept Analysis (FCA) and Non-negative Matrix Factorization (NMF) methods and project the results back onto the aerial photo using effective visualization techniques. FCA allows users to take part in the exploration of particular clusters by navigating the space of clusters. We also present two software systems of our own: one supports validating the extracted features with a neural network, the other supports navigating among the clusters. Even though our approach uses only general image properties, our experimental results demonstrate its usefulness and its potential for further development.
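The clustering step via NMF can be illustrated with a minimal numpy sketch. All names, sizes, and the feature matrix below are illustrative assumptions, not the paper's actual data or implementation: each grid segment is represented by a non-negative feature vector, the matrix is factored with textbook Lee–Seung multiplicative updates, and each segment is assigned to its dominant latent component.

```python
import numpy as np

# Hypothetical setup: 100 grid segments, each described by 16
# non-negative features (e.g. color histograms, shape statistics).
rng = np.random.default_rng(0)
X = rng.random((100, 16))

k, eps = 4, 1e-9            # number of latent clusters (assumed)
W = rng.random((100, k))    # segment-to-cluster weights
H = rng.random((k, 16))     # cluster-to-feature basis

# Lee-Seung multiplicative updates minimizing ||X - WH||_F^2;
# they preserve non-negativity of both factors.
for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

# Assign each segment to the component with the largest weight;
# these labels can then be projected back onto the photo grid.
labels = W.argmax(axis=1)
```

The soft weights in `W` could also drive the visualization directly (e.g. as blended overlay colors) instead of the hard `argmax` assignment.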
The Self-Organizing Map model allows 1D and 3D map topologies, yet 2D maps are by far the most widely used in practice, and there is a lack of theory comparing the relative merits of 1D, 2D, and 3D maps. In this paper such a theory is developed, which can be used to assess which topologies are better suited for vector quantization. In addition, a broad set of experiments is presented, covering unsupervised clustering on machine learning datasets and color image segmentation. Statistical significance tests show that 1D maps perform significantly better in many cases, in agreement with the theoretical study. This opens the way for further applications of the less popular variants of the self-organizing map.
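A 1D self-organizing map used for vector quantization can be sketched as follows. This is a generic textbook SOM with a chain topology, decaying learning rate, and shrinking Gaussian neighborhood; the data, unit count, and schedules are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

# Assumed data: 500 random RGB-like vectors in [0, 1]^3 to quantize.
rng = np.random.default_rng(1)
data = rng.random((500, 3))

n_units = 16                          # 1D chain of map units
weights = rng.random((n_units, 3))    # codebook vectors
positions = np.arange(n_units)        # unit indices along the chain

n_iter = 1000
for t in range(n_iter):
    lr = 0.5 * (1 - t / n_iter)                        # decaying learning rate
    sigma = max(1e-3, (n_units / 2) * (1 - t / n_iter))  # shrinking neighborhood
    x = data[rng.integers(len(data))]
    # Best-matching unit: nearest codebook vector in input space.
    bmu = np.argmin(((weights - x) ** 2).sum(axis=1))
    # Gaussian neighborhood over the 1D chain distance to the BMU.
    h = np.exp(-((positions - bmu) ** 2) / (2 * sigma ** 2))
    weights += lr * h[:, None] * (x - weights)

# Quantization error: mean distance from each sample to its BMU.
d = np.linalg.norm(data[:, None, :] - weights[None, :, :], axis=2)
qe = d.min(axis=1).mean()
```

Swapping `positions` for a 2D or 3D grid of unit coordinates (and computing `h` from grid distances) yields the other topologies the paper compares.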
Never before in history has data been generated at such high volumes as it is today. It is estimated that about one exabyte (one million terabytes) of data is generated every year, a large portion of which is available in digital form. Exploring and analyzing these vast volumes of data is becoming increasingly difficult. This paper describes the Vitamin-S system, which aims to help analysts work with very large data sets.