In this paper, threshold voltage modeling based on neural networks is presented. The database was obtained by performing DC analyses over the possible combinations of MOSFET terminal voltages and channel widths, which directly affect threshold voltage values in submicron technologies. The neural network was trained with a database built from 0.25 µm and 0.40 µm TSMC process parameters. To demonstrate the extrapolation ability, the test dataset was constructed with 0.18 µm TSMC process parameters, which were not used during training. The test results of the neural network are compared with data obtained using the Cadence simulation tool. The excellent agreement between the simulated and modeled results makes neural networks a powerful tool for estimating threshold voltage values.
To increase the computing speed of neural networks through parallelism, a new neural network model, named the Tri-state Cascading Pulse Coupled Neural Network (TCPCNN), is presented in this paper. It brings the tri-state and pipelining ideas of circuit design into the neural network and introduces a new neuron with three states: sub-firing, firing, and inhibition. The proposed model can transmit signals in parallel, since neurons fire not only along the direction of auto-wave propagation but also in its transverse direction. In this paper, TCPCNN is applied to finding the shortest path, and the experimental results indicate that the algorithm has lower computational complexity, higher accuracy, and guaranteed full-scale searching. Furthermore, it has little dependence on initial conditions and parameters. The algorithm is tested in several experiments, and its results are compared with those of classical algorithms: the Dijkstra algorithm, the Bellman-Ford algorithm, and a recent algorithm using pulse coupled neural networks.
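For reference, one of the baselines named above can be sketched directly. This is the classical Dijkstra algorithm (not the TCPCNN itself); the graph encoding as an adjacency dict is an assumption made for the example.

```python
import heapq

def dijkstra(graph, source):
    """Classical Dijkstra shortest-path baseline.

    graph: {node: [(neighbor, weight), ...]} with non-negative weights.
    Returns a dict of shortest distances from `source`.
    """
    dist = {source: 0}
    heap = [(0, source)]          # min-heap of (distance, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue              # stale heap entry, already improved
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd      # relax edge (u, v)
                heapq.heappush(heap, (nd, v))
    return dist

g = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(g, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```

With a binary heap this runs in O((V + E) log V), which is the kind of sequential cost the parallel wave-propagation scheme of TCPCNN is set against.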
Several algorithms have been developed for time series forecasting. In this paper, we develop an algorithm that uses numerical methods to optimize an objective function: the Kullback-Leibler divergence between the joint probability density function of a time series X1, X2, ..., Xn and the product of their marginal distributions. The Gram-Charlier expansion is used for estimating these distributions.
Using the weights obtained by the neural network, and adding to them the Kullback-Leibler divergence of these weights, we obtain new weights that are used for forecasting the new value of Xn+k.
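The central quantity above, the KL divergence between a joint density and the product of its marginals, can be sketched numerically. This toy version discretizes the data with histograms instead of the paper's Gram-Charlier expansion (a deliberate simplification), and the variable pairing and bin count are assumptions of the example.

```python
import numpy as np

def kl_joint_vs_marginals(x, y, bins=8):
    """KL divergence D( p(x, y) || p(x) p(y) ) from histogram estimates.

    For discretized data this is the mutual information of the pair;
    it is zero iff x and y are independent.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    joint = joint / joint.sum()                  # joint pmf estimate
    px = joint.sum(axis=1, keepdims=True)        # marginal of x, shape (bins, 1)
    py = joint.sum(axis=0, keepdims=True)        # marginal of y, shape (1, bins)
    mask = joint > 0                             # 0 * log 0 contributes nothing
    return float(np.sum(joint[mask] * np.log(joint[mask] / (px @ py)[mask])))

rng = np.random.default_rng(1)
x = rng.normal(size=5000)
noise = rng.normal(size=5000)
# A series carries far more information about itself than about
# independent noise, so the first divergence dominates the second.
print(kl_joint_vs_marginals(x, x) > kl_joint_vs_marginals(x, noise))  # True
```

In the forecasting setting described above, the same divergence would be evaluated between lagged copies of the series, so that dependence between Xn and Xn+k is what the optimized weights capture.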