In this paper, we introduce a set of methods for processing and analyzing long time series of 3D images representing embryo evolution. The images are obtained by in vivo scanning with a confocal microscope, where one channel represents the cell nuclei and the other the cell membranes. Our image processing chain consists of three steps: image filtering, object counting (center detection) and segmentation. The corresponding methods are based on the numerical solution of nonlinear PDEs, namely the geodesic mean curvature flow model, flux-based level set center detection and the generalized subjective surface equation. All three models have a similar character and can therefore be solved using a common approach. We explain in detail our semi-implicit time discretization and finite volume space discretization, and conclude this part with a short description of the parallelization of the algorithms. In the part devoted to experiments, we provide the experimental order of convergence of the numerical scheme, a validation of the methods and numerous experiments with data representing an early developmental stage of a zebrafish embryo.
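For orientation, the shared level-set structure of these models can be sketched as follows; this is a standard formulation from the level-set literature, and the paper's exact weights and regularization may differ:

```latex
% Sketch of the two flows in standard level-set form; the weights
% w_A, w_C and the regularization are illustrative, not the paper's.
\[
  \partial_t u \;=\; |\nabla u|_\varepsilon \,
    \nabla\!\cdot\!\Bigl( g\bigl(|\nabla G_\sigma * I^0|\bigr)\,
    \frac{\nabla u}{|\nabla u|_\varepsilon} \Bigr)
  \qquad \text{(geodesic mean curvature flow)}
\]
\[
  \partial_t u \;=\; w_C\, g\, |\nabla u|_\varepsilon \,
    \nabla\!\cdot\!\Bigl( \frac{\nabla u}{|\nabla u|_\varepsilon} \Bigr)
  \;+\; w_A\, \nabla g \cdot \nabla u
  \qquad \text{(generalized subjective surface)}
\]
% Here $I^0$ is the image intensity, $G_\sigma$ a Gaussian kernel,
% $g$ a decreasing edge-detector function, and
% $|\nabla u|_\varepsilon = \sqrt{\varepsilon^2 + |\nabla u|^2}$ the
% Evans--Spruck regularization that keeps the semi-implicit scheme
% well defined where $\nabla u$ vanishes.
```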
We present an approach for probabilistic contour prediction within the framework of an object tracking system. We combine level-set methods for image segmentation with optical flow estimations based on probability distribution functions (pdfs) calculated at each image position. Unlike most recent level-set methods that consider exclusively the sign of the level-set function to determine an object and its background, we introduce a novel interpretation of the value of the level-set function that reflects the confidence in the contour. To this end, in a sequence of consecutive images, the contour of an object is transformed according to the optical flow estimation and used as the initial object hypothesis in the following image. The values of the initial level-set function are set according to the optical flow pdfs and thus provide an opportunity to incorporate the uncertainties of the optical flow estimation in the object contour prediction.
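As a rough illustration of the idea, the following sketch warps the previous level-set function by the mean optical flow and attenuates its values where the flow pdf is uncertain; the inverse-variance confidence weighting and all names are hypothetical, not the paper's exact scheme:

```python
import numpy as np

def predict_levelset(phi, flow_mean, flow_var, eps=1e-6):
    """Sketch: transform the previous level-set function by the mean
    optical flow and modulate its magnitude by a per-pixel confidence
    derived from the flow pdf variance (illustrative weighting only).

    phi       : (H, W) signed level-set function of the previous frame
    flow_mean : (H, W, 2) mean displacement (dy, dx) of the flow pdf
    flow_var  : (H, W) variance of the flow pdf at each pixel
    """
    H, W = phi.shape
    ys, xs = np.indices((H, W))
    # Backward warp: sample phi at the positions the flow maps from.
    src_y = np.clip(np.round(ys - flow_mean[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs - flow_mean[..., 1]).astype(int), 0, W - 1)
    phi_warped = phi[src_y, src_x]
    # Confidence in (0, 1]: high where the flow pdf is sharply peaked.
    confidence = 1.0 / (1.0 + flow_var + eps)
    # Uncertain regions are pulled toward the zero level, i.e. toward
    # "undecided", before the next segmentation step.
    return phi_warped * confidence
```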
This paper presents a segmentation technique for handwritten word recognition. The technique implements an algorithm based on an analytical approach. It uses a letter sweeping procedure whose step equals the Euclidean distance between an established reference index and the entity (the alphabet letter); the entity is dissociated from the word once this distance measure reaches a rate of 80%. A multi-layer perceptron neural classifier then confirms each extracted segment. This procedure is repeated successively from the beginning to the end of the word, and a concatenation technique is finally used to reconstitute the word. In our experiments, this segmentation technique yields a recognition rate of 81.05%.
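The sweep-and-cut idea can be sketched roughly as follows; the 1D feature representation, the similarity mapping and all names are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sweep_segment(word_feats, letter_templates, threshold=0.80):
    """Sketch: sweep a window along the word feature sequence, compare
    it to each alphabet-letter template by Euclidean distance, and cut
    a segment whenever the normalized similarity reaches 80%.

    word_feats       : (N, d) feature vectors along the word
    letter_templates : dict letter -> (w, d) template feature block
    """
    cuts, pos = [], 0
    while pos < len(word_feats):
        best_letter, best_sim, best_w = None, 0.0, 1
        for letter, tmpl in letter_templates.items():
            w = len(tmpl)
            if pos + w > len(word_feats):
                continue
            dist = np.linalg.norm(word_feats[pos:pos + w] - tmpl)
            sim = 1.0 / (1.0 + dist)        # map distance to (0, 1]
            if sim > best_sim:
                best_letter, best_sim, best_w = letter, sim, w
        if best_letter is not None and best_sim >= threshold:
            cuts.append((pos, pos + best_w, best_letter))  # dissociate
            pos += best_w
        else:
            pos += 1                         # advance the sweep one step
    return cuts
```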
The problem of change detection in nonstationary time series using linear regression models is addressed. It is assumed that the data can be accurately described by a linear regression model with piecewise constant parameters. Due to the limitations of some classical approaches based upon the innovation of one autoregressive (AR) model, most of the change detection algorithms presented here make use of two AR models: one is a reference model, and the other is a current model updated via a sliding block. Changes are detected when a suitable “distance” between these two models is high. Three “distance” measures are considered in the paper: the cepstral distance, the log-likelihood ratio (justified by the GLR) and a distance involving the cross-entropy of the two conditional probability laws (divergence test). Other methods based on quadratic forms of Gaussian random variables are also discussed. Finally, a change detection algorithm using three models and the evolution of the Akaike Information Criterion is presented. All the presented algorithms have been evaluated in multiple simulations and have been used for change detection in nonstationary financial and economic time series.
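To make the two-model scheme concrete, the sketch below fits a reference AR model on a fixed block and a current AR model on each sliding block, converts both to cepstra via the standard LPC-to-cepstrum recursion, and reports the cepstral distance per block; the block lengths and model order are illustrative, not the paper's settings:

```python
import numpy as np

def fit_ar(x, p):
    """Least-squares fit of AR(p) coefficients a_1..a_p in
    x[n] = -(a_1 x[n-1] + ... + a_p x[n-p]) + e[n]."""
    X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(X, -x[p:], rcond=None)
    return a

def ar_to_cepstrum(a, n_ceps):
    """Standard recursion for the cepstrum of a minimum-phase AR model:
    c_n = -(a_n + sum_{k=1}^{n-1} (k/n) c_k a_{n-k})."""
    c = np.zeros(n_ceps)
    for n in range(1, n_ceps + 1):
        acc = a[n - 1] if n <= len(a) else 0.0
        for k in range(1, n):
            if n - k <= len(a):
                acc += (k / n) * c[k - 1] * a[n - k - 1]
        c[n - 1] = -acc
    return c

def cepstral_change_scores(x, p=4, block=200, n_ceps=20):
    """Distance between the reference model (first block) and a current
    model refitted on each subsequent sliding block."""
    c_ref = ar_to_cepstrum(fit_ar(x[:block], p), n_ceps)
    scores = []
    for start in range(block, len(x) - block, block // 4):
        c_cur = ar_to_cepstrum(fit_ar(x[start:start + block], p), n_ceps)
        scores.append(np.sum((c_ref - c_cur) ** 2))
    return np.array(scores)   # a change is flagged where this is large
```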
The paper presents a new approach to machine vibration analysis and health monitoring that combines blind source separation (BSS) with change detection in the source signals. The problem is thus translated from the space of the measurements to the space of independent sources, where the reduced number of components simplifies the monitoring problem and where change detection methods for scalar signals can be applied. The approach has been tested in simulation, and an assessment on a real machine is presented in the last part of the paper.
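A minimal sketch of such a pipeline, using FastICA from scikit-learn for the BSS step and a simple two-sided CUSUM detector on each recovered source; the paper's actual BSS algorithm and scalar detector may differ:

```python
import numpy as np
from sklearn.decomposition import FastICA

def cusum_alarms(s, drift=0.5, threshold=8.0):
    """Two-sided CUSUM on a standardized scalar signal; returns the
    indices at which an alarm is raised."""
    s = (s - s.mean()) / (s.std() + 1e-12)
    gp = gm = 0.0
    alarms = []
    for i, v in enumerate(s):
        gp = max(0.0, gp + v - drift)   # upward-change statistic
        gm = max(0.0, gm - v - drift)   # downward-change statistic
        if gp > threshold or gm > threshold:
            alarms.append(i)
            gp = gm = 0.0               # restart after an alarm
    return alarms

def monitor(measurements, n_sources=3):
    """measurements: (n_samples, n_sensors) multichannel vibration data.
    Separate independent sources, then run scalar change detection
    on each source signal."""
    sources = FastICA(n_components=n_sources).fit_transform(measurements)
    return {j: cusum_alarms(sources[:, j]) for j in range(n_sources)}
```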
This package provides an evaluation framework, training and test data for the semi-automatic recognition of sections of historical diplomatic manuscripts. The data collection consists of 57 Latin charters of 7 different types issued by the Royal Chancellery. The documents were created in the era of John the Blind, King of Bohemia (1310–1346) and Count of Luxembourg. The manuscripts were digitized, transcribed, and the typical sections of medieval charters ('corroboratio', 'datatio', 'dispositio', 'inscriptio', 'intitulatio', 'narratio', and 'publicatio') were manually tagged. The manuscripts also contain additional metadata, such as manually marked named entities and short Czech abstracts.
Recognition models are first trained on the manually marked sections of the training documents; a trained model can then be used to recognize the sections in the test data. The parsing script supports methods based on Cosine Distance, TF-IDF weighting and an adapted Viterbi algorithm.
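A minimal sketch of the TF-IDF/cosine variant using scikit-learn; the actual parsing script's features and its Viterbi adaptation are not shown, and the function names here are illustrative:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

SECTIONS = ['corroboratio', 'datatio', 'dispositio', 'inscriptio',
            'intitulatio', 'narratio', 'publicatio']

def train(tagged_passages):
    """tagged_passages: dict section -> list of training passages (str).
    Builds one TF-IDF profile per charter section; character n-grams
    are a reasonable choice for medieval Latin spelling variation."""
    vectorizer = TfidfVectorizer(analyzer='char_wb', ngram_range=(2, 4))
    corpus = [' '.join(tagged_passages[s]) for s in SECTIONS]
    profiles = vectorizer.fit_transform(corpus)
    return vectorizer, profiles

def label_passage(passage, vectorizer, profiles):
    """Assign the section whose TF-IDF profile has the highest cosine
    similarity to the passage."""
    sims = cosine_similarity(vectorizer.transform([passage]), profiles)[0]
    return SECTIONS[sims.argmax()]
```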
The pulse-coupled neural network (PCNN) is a neural network with the ability to extract edges, image segments and texture information from images. Only a few changes to the PCNN parameters are necessary for it to operate effectively on different types of data. This is an advantage over published image segmentation algorithms, which generally require prior information about the target before they are effective.
This paper introduces the PCNN algorithm to provide an accurate segmentation of potential masses in mammogram images and thus assist radiologists in making their decisions. A fuzzy histogram hyperbolization algorithm is first applied to increase the contrast of the mammogram image before segmentation; the PCNN algorithm then extracts the region of interest to arrive at the final result. To test the effectiveness of the introduced algorithm on high-quality images, a set of mammogram images was obtained from the Mammographic Image Analysis Society (MIAS) digital database. Four measures for quantifying enhancement have been adopted in this work, each based on the statistical information obtained from the labeled region of interest and a border area surrounding it. A comparison with the fuzzy c-means clustering algorithm has been made.
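For reference, a compact sketch of the standard PCNN iteration; the parameter values and the coupling kernel below are illustrative defaults, not the values tuned for mammograms in the paper:

```python
import numpy as np
from scipy.signal import convolve2d

def pcnn_segment(img, n_iter=30, beta=0.2,
                 aF=0.1, aL=1.0, aT=0.5, vF=0.5, vL=0.2, vT=20.0):
    """Standard pulse-coupled neural network iteration: feeding (F) and
    linking (L) fields with leaky decay, internal activity
    U = F * (1 + beta * L), and pulses Y where U exceeds the dynamic
    threshold Theta."""
    img = img.astype(float) / (img.max() + 1e-12)   # stimulus S in [0, 1]
    K = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])                 # local coupling kernel
    F = np.zeros_like(img); L = np.zeros_like(img)
    Y = np.zeros_like(img); T = np.ones_like(img)
    fired = np.zeros_like(img)                      # accumulated pulse map
    for _ in range(n_iter):
        W = convolve2d(Y, K, mode='same')           # neighbor pulse input
        F = np.exp(-aF) * F + vF * W + img
        L = np.exp(-aL) * L + vL * W
        U = F * (1.0 + beta * L)
        Y = (U > T).astype(float)
        T = np.exp(-aT) * T + vT * Y                # raise fired thresholds
        fired += Y
    return fired   # pixels of one region tend to share firing counts
```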
Universal Segmentations (UniSegments) is a collection of lexical resources capturing morphological segmentations harmonised into a cross-linguistically consistent annotation scheme for many languages. The annotation scheme consists of simple tab-separated columns that store a word and its morphological segmentation, including information about the word and the segmented units, e.g., part-of-speech categories, the type of morphs/morphemes, etc. The current public version of the collection contains 38 harmonised segmentation datasets covering 30 different languages.
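As a rough illustration of working with such a scheme, the sketch below parses a generic tab-separated segmentation file into a word-to-analyses mapping; the column layout assumed here (WORD, POS, SEGMENTATION) is purely illustrative, and the authoritative column set is the one described in the UniSegments release documentation:

```python
from collections import defaultdict

def load_segmentations(path):
    """Sketch: read a tab-separated segmentation file with assumed
    columns WORD, POS, SEGMENTATION (morphs joined by '+'); the real
    UniSegments layout may differ -- consult the release docs."""
    lexicon = defaultdict(list)
    with open(path, encoding='utf-8') as fh:
        for line in fh:
            line = line.rstrip('\n')
            if not line or line.startswith('#'):
                continue                      # skip blanks and comments
            word, pos, segmentation = line.split('\t')[:3]
            lexicon[word].append({'pos': pos,
                                  'morphs': segmentation.split('+')})
    return lexicon
```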