This contribution focuses on the design of a control algorithm for the operational control of outflow from a reservoir during flood situations. Management is based on a stochastically specified forecast of water inflow into the reservoir. From a mathematical perspective, the task addressed is the control of a dynamic system whose predicted hydrological input (water inflow) carries significant uncertainty. The algorithm combines data from a simulation model, in which the setting of the bottom outlets is sought via nonlinear optimisation methods, with artificial-intelligence methods (adaptation and a fuzzy model). The task is implemented in the MATLAB technical computing language using the Fuzzy Logic Toolbox.
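The role of the fuzzy model can be illustrated with a minimal membership-function sketch. This is an illustration only, written in Python rather than the MATLAB Fuzzy Logic Toolbox used in the paper, and the "high inflow" thresholds below are invented, not taken from the study:

```python
def tri_mf(x, a, b, c):
    """Degree of membership in a triangular fuzzy set that rises on
    [a, b] and falls on [b, c]; returns a value in [0, 1]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

# Membership of an inflow of 450 m3/s in a hypothetical "high inflow"
# set with support (300, 700) and peak at 500:
high = tri_mf(450.0, 300.0, 500.0, 700.0)  # 0.75
```

A full fuzzy controller would aggregate several such sets over inflow and reservoir level into outflow rules; the sketch only shows the fuzzification step.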
Car manufacturers define proprietary protocols to be used inside their vehicular networks, which are kept as industrial secrets, thereby impeding independent researchers from extracting information from these networks. This article describes a statistical and a neural-network approach that allow reverse engineering proprietary controller area network (CAN) protocols, assuming they were designed using the CAN database (DBC) file format. The proposed algorithms are tested with CAN traces taken from a real car. We show that our approaches can correctly reverse engineer CAN messages in an automated manner.
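One common statistical cue for recovering DBC signal boundaries is that bits inside a counter- or sensor-like signal flip far more often between consecutive frames than padding or flag bits do. The sketch below is a generic illustration of that idea, not the authors' algorithm, and the toy trace is synthetic:

```python
def bit_flip_rates(payloads):
    """For a list of equal-length payloads (bytes) of one CAN ID, return
    the per-bit flip rate between consecutive frames (bit 0 = MSB of
    byte 0). High-rate bits suggest low-order bits of a live signal."""
    nbits = len(payloads[0]) * 8
    flips = [0] * nbits
    for prev, cur in zip(payloads, payloads[1:]):
        for i in range(nbits):
            pb = (prev[i // 8] >> (7 - i % 8)) & 1
            cb = (cur[i // 8] >> (7 - i % 8)) & 1
            flips[i] += pb ^ cb
    n = len(payloads) - 1
    return [f / n for f in flips]

# Toy trace: a one-byte counter signal. The least-significant bit flips
# on every frame, while the high-order bits stay almost constant.
rates = bit_flip_rates([bytes([i]) for i in range(16)])
```

In practice, sharp drops in the flip-rate profile across adjacent bit positions are candidate boundaries between signals.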
A key physical property used in the description of the soil-water regime is the soil water retention curve, which shows the relationship between the water content and the water potential of the soil. Pedotransfer functions are based on the supposed dependence of the soil water content on available soil characteristics, e.g., on the relative content of particle-size fractions and the dry bulk density of the soil. This dependence can be extracted from the available data by various regression methods. In this paper, artificial neural networks (ANNs) and support vector machines (SVMs) were used to estimate the drying branch of the water retention curve. The paper compares these methods by estimating water retention curves on a regional scale for the Záhorská lowland in the Slovak Republic, where only a relatively small data set was available. The performance of the models was evaluated and compared. These computations did not fully confirm the superiority of SVMs over ANNs that is often proclaimed in the literature: the results obtained show that in this study the ANN model performs somewhat better and is easier to handle in determining pedotransfer functions than the SVM models. Nevertheless, the results from both data-driven models are quite close, and they provide a significantly more precise outcome than traditional multi-linear regression does.

In this contribution the authors deal with the determination of pedotransfer functions (PTFs), which make it possible to estimate points of soil water retention curves from more easily measured soil properties and are an important element of soil water regime modelling. Attempts to determine them with artificial neural networks (ANNs) appeared as early as the last decade. The multi-layer perceptron (MLP) is the most frequently used feed-forward artificial neural network model with supervised learning. Input signals pass through an MLP network in the forward direction only, i.e., successively from layer to layer.
An MLP uses three or more layers of neurons, divided into an input, a hidden, and an output layer with a nonlinear activation function, and can recognise or model information that is not linearly separable or dependent. Newer developments in learning algorithms provide further options, of which this contribution focuses on support vector machines (SVMs). When being calibrated to the problem at hand, an SVM applies the principle of structural risk minimisation instead of mere error minimisation (Vapnik, 1995). When training an MLP network, the sole aim is to minimise the overall error; with an SVM, the error and the model complexity are minimised simultaneously. Applying this principle usually leads to a higher generalisation capability, i.e., to more accurate predictions for data that were not used in training the SVM. The suitability of a standard artificial neural network, an SVM, and multiple linear regression is evaluated in the article on data obtained from soil samples taken in the Záhorská lowland. The original data and their application in evaluating the soil water regime are described by Skalová (2001, 2007), from which the input data were taken, namely the percentage content of the grain-size categories (I to IV according to Kopecký), the reduced bulk density (ρd), and the water contents for the water potentials hw = -2.5, -56, -209, -558, -976, -3060, and -15300 cm, which were determined in the laboratory for the purposes of deriving and testing the regression relationships. Since a shortage of data for deriving data-driven models is a frequent situation when regional PTFs are derived, the authors proposed to solve the task using an ensemble of MLPs or SVMs. The ensemble of data-driven models was created by variably splitting the data into training and validation sets (the validation data test the accuracy of the model during its development; in addition, final test data, which were not used in building the model, are employed).
The results showed better regression capabilities of both data-driven models (SVM and MLP) compared with multilinear regression, and somewhat better results were obtained from the multi-layer perceptron than from the SVM. Since in some other studies a model based on SVMs usually achieved higher computational accuracy than one based on ANNs, the authors recommend that future research examine the suitability of combining SVM and MLP models in a data-driven ensemble model.
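The multilinear-regression baseline against which both data-driven models are compared can be sketched with ordinary least squares: predict the water content at one potential from the grain-size fractions and bulk density. The data below are synthetic placeholders, not the Záhorská lowland samples, and the coefficients are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
# 40 hypothetical samples: 4 grain-size fractions + reduced bulk density.
X = rng.uniform(size=(40, 5))
true_w = np.array([0.3, 0.1, -0.2, 0.05, -0.1])   # invented PTF weights
y = X @ true_w + 0.25                             # synthetic water contents

A = np.hstack([X, np.ones((40, 1))])              # add intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)      # multilinear PTF fit
pred = A @ coef
rmse = float(np.sqrt(np.mean((pred - y) ** 2)))   # ~0 on noiseless data
```

An MLP or SVM regressor would replace the linear map `A @ coef` with a nonlinear model fitted to the same predictors, which is where the accuracy gain reported above comes from.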
This submission contains trained end-to-end models for the Neural Monkey toolkit for Czech and English, solving three NLP tasks: machine translation, image captioning, and sentiment analysis.
The models are trained on standard datasets and achieve state-of-the-art or near state-of-the-art performance in the tasks.
The models are described in the accompanying paper.
The same models can also be invoked via the online demo: https://ufal.mff.cuni.cz/grants/lsd
There are several separate ZIP archives here, each containing one model solving one of the tasks for one language.
To use a model, you first need to install Neural Monkey: https://github.com/ufal/neuralmonkey
To ensure correct functioning of the model, please use the exact version of Neural Monkey specified by the commit hash stored in the 'git_commit' file in the model directory.
Each model directory contains a 'run.ini' Neural Monkey configuration file, to be used to run the model. See the Neural Monkey documentation to learn how to do that (you may need to update some paths to correspond to your filesystem organization).
The 'experiment.ini' file, which was used to train the model, is also included.
The model directories also contain the files of the model itself, the input and output vocabulary files, etc.
For the sentiment analyzers, you should tokenize your input data using the Moses tokenizer: https://pypi.org/project/mosestokenizer/
For machine translation, you do not need to tokenize the data, as this is done by the model itself.
For image captioning, you need to:
- download a trained ResNet: http://download.tensorflow.org/models/resnet_v2_50_2017_04_14.tar.gz
- clone the git repository with TensorFlow models: https://github.com/tensorflow/models
- preprocess the input images with the Neural Monkey 'scripts/imagenet_features.py' script (https://github.com/ufal/neuralmonkey/blob/master/scripts/imagenet_features.py) -- you need to specify the path to ResNet and to the TensorFlow models to this script
Feel free to contact the authors of this submission in case you run into problems!
This submission contains trained end-to-end models for the Neural Monkey toolkit for Czech and English, solving four NLP tasks: machine translation, image captioning, sentiment analysis, and summarization.
The models are trained on standard datasets and achieve state-of-the-art or near state-of-the-art performance in the tasks.
The models are described in the accompanying paper.
The same models can also be invoked via the online demo: https://ufal.mff.cuni.cz/grants/lsd
In addition to the models presented in the referenced paper (developed and published in 2018), we include models for automatic news summarization for Czech and English developed in 2019. The Czech models were trained using the SumeCzech dataset (https://www.aclweb.org/anthology/L18-1551.pdf), the English models were trained using the CNN-Daily Mail corpus (https://arxiv.org/pdf/1704.04368.pdf) using the standard recurrent sequence-to-sequence architecture.
There are several separate ZIP archives here, each containing one model solving one of the tasks for one language.
To use a model, you first need to install Neural Monkey: https://github.com/ufal/neuralmonkey
To ensure correct functioning of the model, please use the exact version of Neural Monkey specified by the commit hash stored in the 'git_commit' file in the model directory.
Each model directory contains a 'run.ini' Neural Monkey configuration file, to be used to run the model. See the Neural Monkey documentation to learn how to do that (you may need to update some paths to correspond to your filesystem organization).
The 'experiment.ini' file, which was used to train the model, is also included.
The model directories also contain the files of the model itself, the input and output vocabulary files, etc.
For the sentiment analyzers, you should tokenize your input data using the Moses tokenizer: https://pypi.org/project/mosestokenizer/
For machine translation, you do not need to tokenize the data, as this is done by the model itself.
For image captioning, you need to:
- download a trained ResNet: http://download.tensorflow.org/models/resnet_v2_50_2017_04_14.tar.gz
- clone the git repository with TensorFlow models: https://github.com/tensorflow/models
- preprocess the input images with the Neural Monkey 'scripts/imagenet_features.py' script (https://github.com/ufal/neuralmonkey/blob/master/scripts/imagenet_features.py) -- you need to specify the path to ResNet and to the TensorFlow models to this script
The summarization models require input that is tokenized with Moses Tokenizer (https://github.com/alvations/sacremoses) and lower-cased.
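The lower-casing step is trivial; the sketch below shows it together with a crude regex split that is only a stand-in for the Moses tokenizer, which should be used in practice as stated above:

```python
import re

def preprocess(line):
    """Lower-case a line and crudely separate punctuation from words.
    This naive split only approximates Moses tokenization; use the real
    Moses tokenizer for the summarization models."""
    line = line.lower()
    return " ".join(re.findall(r"\w+|[^\w\s]", line))

preprocess("Hello, World!")  # -> "hello , world !"
```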
Feel free to contact the authors of this submission in case you run into problems!
This paper describes an ongoing project that aims to develop a low-cost application to replace a computer mouse for people with physical impairments. The application is based on an eye-tracking algorithm and assumes that the camera and the head position are fixed. Color tracking and template matching methods are used for pupil detection. Calibration is provided by neural networks as well as by parametric interpolation methods. The neural networks use back-propagation for learning, and a bipolar sigmoid function is chosen as the activation function. The user's eye is scanned with a simple web camera with backlight compensation, which is attached to a head-fixation device. Neural networks significantly outperform the parametric interpolation techniques: 1) the calibration procedure is faster, as they require fewer calibration marks, and 2) cursor control is more precise. The system in its current stage of development is able to distinguish regions at least on the level of desktop icons. The main limitation of the proposed method is the lack of head-pose invariance and its relative sensitivity to illumination (especially to incidental pupil reflections). E. Demjén, V. Aboši, Z. Tomori. Contains a bibliography and bibliographic references.
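The bipolar sigmoid activation mentioned above has a compact closed form; a minimal sketch (illustrative only, not the paper's implementation):

```python
import math

def bipolar_sigmoid(x):
    """Bipolar sigmoid activation: maps the real line to (-1, 1).
    Algebraically equivalent to tanh(x/2)."""
    return 2.0 / (1.0 + math.exp(-x)) - 1.0

bipolar_sigmoid(0.0)  # -> 0.0
```

Unlike the standard logistic sigmoid, its output is centred at zero, which is why it is often preferred for back-propagation networks of this kind.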