In this short note, we introduce a new architecture for a spiking perceptron: the actual output is a linear combination of the firing time of the perceptron and the spiking intensity (the gradient of the state function) at the firing time. Numerical experiments show that this novel spiking perceptron can solve the XOR problem, whereas a classical spiking neuron usually needs a hidden layer to do so.
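A minimal Python sketch of the described output rule may help fix ideas. The alpha-shaped response kernel, the threshold `theta`, and the helper names `state`, `fire`, and `output` are illustrative assumptions (the note does not specify the state function), so this is a sketch of the architecture, not the authors' implementation:

```python
import numpy as np

def state(t, spike_times, weights, tau=1.0):
    """Membrane state u(t): weighted sum of alpha-shaped responses to the
    input spike times.  Assumption: a common Spike Response Model kernel;
    the note's actual state function may differ."""
    s = np.maximum(t - np.asarray(spike_times), 0.0)   # time since each input spike
    eps = (s / tau) * np.exp(1.0 - s / tau)            # alpha kernel, peak value 1 at s = tau
    return float(np.dot(weights, eps))

def fire(spike_times, weights, theta=1.0, dt=1e-3, t_max=10.0):
    """Return (firing time, spiking intensity), where the intensity is the
    numerical gradient du/dt at the threshold crossing."""
    t = 0.0
    while t < t_max:
        if state(t, spike_times, weights) >= theta:
            grad = (state(t, spike_times, weights)
                    - state(t - dt, spike_times, weights)) / dt
            return t, grad
        t += dt
    return t_max, 0.0                                  # no spike within the window

def output(spike_times, weights, a, b):
    """Actual output of the unit: a linear combination of the firing time
    and the spiking intensity at that time, as described in the note."""
    t_f, grad = fire(spike_times, weights)
    return a * t_f + b * grad
```

Because the intensity term adds a second, nonlinearly related feature of the same spike event, coefficients `a` and `b` give the single unit more expressive power than the firing time alone, which is what the XOR experiments exploit.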
This paper considers a fuzzy perceptron that has the same topological structure as the conventional linear perceptron, and proposes a learning algorithm for it based on a fuzzy δ rule. The inner operations of this fuzzy perceptron rely on max-min logical operations rather than the conventional multiplication and summation. The initial values of the network weights are all fixed at 1. We show that each weight is non-increasing during training and remains unchanged once it falls below 0.5. A key advantage of the learning algorithm, proved in this paper, is that it converges in a finite number of steps if the training patterns are fuzzily separable; this generalizes the corresponding classical result for conventional linear perceptrons. Numerical experiments are provided to support our theoretical findings.
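The following Python sketch illustrates the max-min inference and a hypothetical fuzzy δ-rule update consistent with the properties stated above (weights start at 1, only decrease, and freeze below 0.5). The exact update rule, learning rate `eta`, and function names `forward` and `train` are assumptions for illustration, not the paper's algorithm:

```python
import numpy as np

def forward(w, x):
    """Max-min inference: o = max_i min(w_i, x_i), replacing the weighted
    sum of a conventional linear perceptron."""
    return np.max(np.minimum(w, x))

def train(patterns, eta=0.1, epochs=100):
    """Hypothetical fuzzy delta-rule training loop.  Weights are
    initialized to 1 and only ever decreased; a weight is frozen once it
    drops below 0.5, matching the abstract's stated properties."""
    n = len(patterns[0][0])
    w = np.ones(n)
    for _ in range(epochs):
        for x, target in patterns:
            x = np.asarray(x, dtype=float)
            err = forward(w, x) - target
            if err > 0.0:                       # output too large: shrink weights
                # update only weights that are active (w_i <= x_i, so the
                # min is limited by w_i) and still plastic (w_i >= 0.5)
                mask = (w <= x) & (w >= 0.5)
                w[mask] = np.maximum(w[mask] - eta * err, 0.0)
    return w
```

Since updates only shrink the weight that limits each min term, every weight is non-increasing by construction, which is the monotonicity property the paper proves for its actual rule.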