Accuracy alone can be deceptive when evaluating the performance of a classifier, especially if the problem involves a high number of classes. This paper proposes an approach to multi-class problems that avoids this issue. The approach is based on the Extreme Learning Machine (ELM) classifier, trained using a Differential Evolution (DE) algorithm. Two error measures, Accuracy ($C$) and Sensitivity ($S$), are combined and applied as the fitness function of the algorithm. The proposed approach obtains multi-class classifiers with a high classification rate on the global dataset while keeping an acceptable level of accuracy for each class. The methodology is evaluated on seven benchmark classification problems and one real-world problem, obtaining promising results.
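The combined fitness can be sketched as follows. The convex combination f = λC + (1 − λ)S, with S taken as the sensitivity of the worst-classified class, is an assumption for illustration; the function name and λ are hypothetical and the paper's exact formula may differ:

```python
from collections import defaultdict

def fitness(y_true, y_pred, lam=0.5):
    """Hedged sketch: combine global accuracy C with the minimum
    per-class sensitivity S into one scalar f = lam*C + (1-lam)*S,
    so a solution cannot score well by sacrificing one class."""
    C = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        hits[t] += (t == p)
    # S: sensitivity (recall) of the worst-classified class
    S = min(hits[c] / totals[c] for c in totals)
    return lam * C + (1 - lam) * S
```

A DE algorithm would maximize this value over the ELM's hidden-layer parameters, trading a little global accuracy for a better worst class.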
Car manufacturers define proprietary protocols to be used inside their vehicular networks, which are kept as industrial secrets; this impedes independent researchers from extracting information from these networks. This article describes a statistical approach and a neural network approach that allow reverse engineering proprietary controller area network (CAN) protocols, assuming they were designed using the DBC (CAN database) file format. The proposed algorithms are tested on CAN traces taken from a real car. We show that our approaches can correctly reverse engineer CAN messages in an automated manner.
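One statistical ingredient of such reverse engineering can be sketched as measuring per-bit toggle rates across consecutive frames of one CAN ID; counters and sensor fields produce characteristic rate patterns that hint at signal boundaries. This is a hedged illustration of the general idea, not the article's exact algorithm:

```python
def bit_flip_rates(frames):
    """Hedged sketch: for each bit position of a fixed-length CAN
    payload, compute the fraction of consecutive frame pairs in which
    that bit toggles. Discontinuities in the resulting profile often
    mark boundaries between DBC-style signals (illustrative only)."""
    n_bits = len(frames[0]) * 8
    flips = [0] * n_bits
    for prev, cur in zip(frames, frames[1:]):
        for i in range(n_bits):
            byte, bit = divmod(i, 8)
            # test bit i (MSB-first within each byte) of the XOR
            if ((prev[byte] ^ cur[byte]) >> (7 - bit)) & 1:
                flips[i] += 1
    n = len(frames) - 1
    return [f / n for f in flips]
```

For a one-byte counter field, the least significant bit flips on every frame while higher bits flip progressively less often, which is exactly the profile the sketch exposes.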
In this study, applications of well-known machine learning models, namely the artificial neural network (ANN), the adaptive neuro-fuzzy inference system (ANFIS) and the support vector machine (SVM), to the classification of wheat grains into three species are comparatively presented. The species, Kama (70 samples), Rosa (70) and Canadian (70), are designated as the outputs of the models. The classification is carried out on data from 210 wheat grains acquired using an X-ray technique. The dataset includes seven geometric parameters of each grain: area, perimeter, compactness, length, width, asymmetry coefficient and groove length. The models take the geometric parameters as inputs; they are trained on 189 grain samples and their accuracies are tested on the remaining 21. The models are compared with regard to accuracy, efficiency and convenience. On the test data, the ANN, ANFIS and SVM models compute the outputs with a mean absolute error (MAE) of 0.014, 0.018 and 0.135, and classify the grains with an accuracy of 100%, 100% and 95.23%, respectively. Furthermore, the dataset of 210 grains is synthetically enlarged to 3210 samples in order to investigate the proposed models on big data. The models prove even more successful as the size of the data increases. These results indicate that such models can be successfully applied to the classification of agricultural grains, provided they are properly modelled and trained.
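The two reported scores can be sketched as follows, assuming the three species are encoded as the integers 0-2 and each model emits one continuous output per grain; this encoding and the function name are illustrative assumptions, not details from the study:

```python
def mae_and_accuracy(y_true, y_raw):
    """Hedged sketch of the two reported metrics: the models emit a
    continuous output per grain, scored directly with MAE, and, after
    rounding to the nearest class index, with classification accuracy."""
    mae = sum(abs(t - y) for t, y in zip(y_true, y_raw)) / len(y_true)
    acc = sum(t == round(y) for t, y in zip(y_true, y_raw)) / len(y_true)
    return mae, acc
```

This makes the relationship between the two numbers concrete: an MAE of 0.135 can still round to the correct class for most grains, which is how the SVM reaches 95.23% accuracy despite a much larger numeric error than the other models.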
A decrease of attention and an eventual microsleep of a human operator of an artificial system is very dangerous, and its early detection can prevent great losses. This chapter deals with the classification of states of vigilance based on analysis of the electroencephalographic (EEG) activity of the brain. The data are preprocessed with the discrete Fourier transform. For the recognition, radial basis function (RBF) networks, learning vector quantization (LVQ), multi-layer perceptron networks, the k-nearest neighbour method and a method based on Bayesian theory are used. The coefficients of the Bayes classifier are found using maximum likelihood estimation. The experiments analyse human vigilance while the subjects' eyes are open; the reaction to visual stimuli is then investigated. For this experiment, 10 volunteers were measured repeatedly. The chapter shows that it is possible to classify vigilance under such conditions.
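The discrete Fourier transform preprocessing step can be sketched as computing the spectral power of one EEG channel inside a frequency band; the 8-13 Hz alpha band below is an illustrative choice (alpha power is known to change with vigilance), not necessarily the chapter's exact band:

```python
import cmath
import math

def band_power(signal, fs, lo, hi):
    """Hedged sketch of DFT preprocessing: a naive discrete Fourier
    transform of one EEG channel, summing squared spectral magnitudes
    over the bins whose frequency falls inside [lo, hi] Hz.
    fs is the sampling frequency in Hz."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if lo <= freq <= hi:
            coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))
            power += abs(coeff) ** 2
    return power
```

A feature vector of such band powers (e.g. delta, theta, alpha, beta) per channel would then feed the RBF, LVQ, perceptron, k-NN and Bayes classifiers named above.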
Combining classifiers is a promising direction in designing effective pattern recognition systems. There are several approaches to collective decision-making, including the quite popular voting methods, where the decision is a combination of the individual classifiers' outputs. This article focuses on the problem of designing a fuser which uses the discriminants of individual classifiers to make a decision. We present a taxonomy of proposed fusers and discuss some of their properties. We focus on the fuser whose weights depend on both the classifier and the class number, because of the fairly low computational cost of its training. We formulate fuser learning as an optimization task and propose a solver rooted in neural computations. The quality of the proposed learning algorithm was evaluated in several computer experiments carried out on five benchmark datasets; the results confirm the quality of the proposed concept.
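The described fuser can be sketched as a weighted sum of discriminants with one trainable weight per (classifier, class) pair, the decision being the class with the largest fused support. This is a minimal illustration of the weighting scheme only, not the proposed neural training algorithm:

```python
def fuse(discriminants, weights):
    """Hedged sketch of the fuser: discriminants[i][j] is the support
    classifier i gives to class j, and weights[i][j] is the trainable
    weight for that (classifier, class) pair. The fused support of
    class j is sum_i weights[i][j] * discriminants[i][j]; the decision
    is its argmax."""
    n_classes = len(discriminants[0])
    fused = [sum(w[j] * d[j] for d, w in zip(discriminants, weights))
             for j in range(n_classes)]
    return max(range(n_classes), key=fused.__getitem__)
```

With classifiers × classes weights, training cost grows only linearly in both counts, which is the "pretty low computational cost" argument made above.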
Introduction: A dataset of 826 patients suspected of prostate cancer was examined. The best single marker, and the combination of markers, that could predict prostate cancer at a very early stage of the disease were sought. Methods: Markers were combined using logistic regression, a multilayer perceptron neural network and the k-nearest neighbour method. Ten models for each method were developed on the training dataset and their predictive accuracy was verified on the test dataset. Results and conclusions: ROC curves for the models were constructed and AUCs were estimated. All three methods gave comparable results. The medians of the AUC estimates were 0.775, larger than the AUC of the best single marker.
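The AUC estimation step can be sketched with the pairwise (Mann-Whitney) form of the ROC area, which equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one, ties counting half; the function name is illustrative:

```python
def auc(scores, labels):
    """Hedged sketch of AUC estimation: the pairwise form of the area
    under the ROC curve. labels are 1 (cancer) / 0 (no cancer), scores
    are the model outputs; ties contribute 0.5 per pair."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

Comparing this value for each combined model against the best single marker is exactly the comparison summarized by the 0.775 median above.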
This submission contains trained end-to-end models for the Neural Monkey toolkit for Czech and English, solving three NLP tasks: machine translation, image captioning, and sentiment analysis.
The models are trained on standard datasets and achieve state-of-the-art or near state-of-the-art performance in the tasks.
The models are described in the accompanying paper.
The same models can also be invoked via the online demo: https://ufal.mff.cuni.cz/grants/lsd
There are several separate ZIP archives here, each containing one model solving one of the tasks for one language.
To use a model, you first need to install Neural Monkey: https://github.com/ufal/neuralmonkey
To ensure correct functioning of the model, please use the exact version of Neural Monkey specified by the commit hash stored in the 'git_commit' file in the model directory.
Each model directory contains a 'run.ini' Neural Monkey configuration file, to be used to run the model. See the Neural Monkey documentation to learn how to do that (you may need to update some paths to correspond to your filesystem organization).
The 'experiment.ini' file, which was used to train the model, is also included.
Then there are files containing the model itself, files containing the input and output vocabularies, etc.
For the sentiment analyzers, you should tokenize your input data using the Moses tokenizer: https://pypi.org/project/mosestokenizer/
For machine translation, you do not need to tokenize the data; this is done by the model.
For image captioning, you need to:
- download a trained ResNet: http://download.tensorflow.org/models/resnet_v2_50_2017_04_14.tar.gz
- clone the git repository with TensorFlow models: https://github.com/tensorflow/models
- preprocess the input images with the Neural Monkey 'scripts/imagenet_features.py' script (https://github.com/ufal/neuralmonkey/blob/master/scripts/imagenet_features.py) -- you need to specify the path to ResNet and to the TensorFlow models to this script
Feel free to contact the authors of this submission in case you run into problems!
This submission contains trained end-to-end models for the Neural Monkey toolkit for Czech and English, solving four NLP tasks: machine translation, image captioning, sentiment analysis, and summarization.
The models are trained on standard datasets and achieve state-of-the-art or near state-of-the-art performance in the tasks.
The models are described in the accompanying paper.
The same models can also be invoked via the online demo: https://ufal.mff.cuni.cz/grants/lsd
In addition to the models presented in the referenced paper (developed and published in 2018), we include models for automatic news summarization for Czech and English developed in 2019. The Czech models were trained on the SumeCzech dataset (https://www.aclweb.org/anthology/L18-1551.pdf) and the English models on the CNN-DailyMail corpus (https://arxiv.org/pdf/1704.04368.pdf), both using the standard recurrent sequence-to-sequence architecture.
There are several separate ZIP archives here, each containing one model solving one of the tasks for one language.
To use a model, you first need to install Neural Monkey: https://github.com/ufal/neuralmonkey
To ensure correct functioning of the model, please use the exact version of Neural Monkey specified by the commit hash stored in the 'git_commit' file in the model directory.
Each model directory contains a 'run.ini' Neural Monkey configuration file, to be used to run the model. See the Neural Monkey documentation to learn how to do that (you may need to update some paths to correspond to your filesystem organization).
The 'experiment.ini' file, which was used to train the model, is also included.
Then there are files containing the model itself, files containing the input and output vocabularies, etc.
For the sentiment analyzers, you should tokenize your input data using the Moses tokenizer: https://pypi.org/project/mosestokenizer/
For machine translation, you do not need to tokenize the data; this is done by the model.
For image captioning, you need to:
- download a trained ResNet: http://download.tensorflow.org/models/resnet_v2_50_2017_04_14.tar.gz
- clone the git repository with TensorFlow models: https://github.com/tensorflow/models
- preprocess the input images with the Neural Monkey 'scripts/imagenet_features.py' script (https://github.com/ufal/neuralmonkey/blob/master/scripts/imagenet_features.py) -- you need to specify the path to ResNet and to the TensorFlow models to this script
The summarization models require input that is tokenized with Moses Tokenizer (https://github.com/alvations/sacremoses) and lower-cased.
Feel free to contact the authors of this submission in case you run into problems!
Detection and early prediction of hypnagogium based on EEG analysis is a very promising way to deal with different states of vigilance. We process the EEG signal with several methods, mainly based on spectral analysis: the Fourier transform, autoregressive models and various kinds of filters. For the detection of hypnagogium we use the Bayes classifier, nearest-neighbour methods and neural networks. We analyse the EEG in order to recognize and classify hypnagogium.
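The autoregressive modelling mentioned above can be sketched with a least-squares AR(1) fit; shifts in such coefficients between vigilance states could serve as classifier inputs. This is illustrative only, not the exact model order or estimator used here:

```python
def ar1_coefficient(x):
    """Hedged sketch of an autoregressive EEG feature: the AR(1)
    coefficient a in x[t] ~ a * x[t-1], fitted by least squares
    (normal-equation solution for the one-parameter case)."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den
```

Higher-order AR(p) fits work the same way with a p×p system of normal equations, and the fitted coefficients summarize the spectrum compactly, complementing the Fourier features.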
Exchange rate forecasting is an important and challenging task for both academic researchers and business practitioners. Several statistical and artificial intelligence approaches have been applied to exchange rate forecasting. A recent trend for improving prediction accuracy is to combine individual forecasts as a simple or weighted average, where each weight reflects the inverse of the model's prediction error. Such combinations, however, do not weight the current prediction error more heavily than relatively old performance. In this paper, we propose a new approach in which the forecasts of GARCH and neural network models are combined with weights reflecting the inverse of an exponentially weighted moving average (EWMA) of the mean absolute percentage error (MAPE) of each individual prediction model. Empirical results indicate that the proposed combining method is more accurate than GARCH, neural networks, and traditional combining methods that use the MAPE for the weight.
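The proposed weighting can be sketched as follows; the smoothing factor alpha and the function names are illustrative assumptions, not values from the paper:

```python
def combine_forecasts(garch_pred, nn_pred, garch_errs, nn_errs, alpha=0.3):
    """Hedged sketch of the proposed combination: each model's weight
    is the inverse of an EWMA of its past absolute percentage errors,
    so recent accuracy counts more than old performance. The combined
    forecast is the weight-normalized average of the two predictions."""
    def ewma(errs):
        s = errs[0]
        for e in errs[1:]:
            s = alpha * e + (1 - alpha) * s  # recent errors weigh more
        return s
    wg, wn = 1 / ewma(garch_errs), 1 / ewma(nn_errs)
    return (wg * garch_pred + wn * nn_pred) / (wg + wn)
```

Compared with weighting by the plain average MAPE, the EWMA lets the combination shift quickly toward whichever model has been more accurate lately, which is the stated motivation of the approach.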