Recurrent neural networks (RNNs) have greater representational potential than classical feed-forward neural networks: their output responses depend on the temporal position of a given input, so they can be applied successfully to spatio-temporal processing tasks. In the cognitive science community, RNNs are often used to process symbol sequences that represent various natural language structures. They are usually trained with gradient-based algorithms such as real-time recurrent learning (RTRL) or backpropagation through time (BPTT). This work compares the RTRL algorithm, as a representative of the gradient-based approaches, with an extended Kalman filter (EKF) methodology adapted for training Elman's simple recurrent network (SRN). We trained the SRN on a next-symbol prediction task using data sets containing recursive structures inspired by studies in the cognitive science community. Although computationally more expensive, the EKF approach proved more robust, and the resulting next-symbol prediction performance was higher.
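To make the architecture concrete, the following is a minimal sketch of an Elman SRN forward pass for next-symbol prediction; the layer sizes, weight initialization, and class/function names are illustrative assumptions, not details taken from the paper, and no training procedure (RTRL or EKF) is shown.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the output units.
    e = np.exp(z - z.max())
    return e / e.sum()

class ElmanSRN:
    """Illustrative Elman simple recurrent network (SRN).

    The hidden state from the previous time step is fed back as the
    context layer, so the output depends on the input's temporal position.
    All sizes and parameter names here are hypothetical.
    """
    def __init__(self, n_symbols, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in  = rng.normal(0.0, 0.1, (n_hidden, n_symbols))  # input -> hidden
        self.W_rec = rng.normal(0.0, 0.1, (n_hidden, n_hidden))   # context -> hidden
        self.W_out = rng.normal(0.0, 0.1, (n_symbols, n_hidden))  # hidden -> output
        self.h = np.zeros(n_hidden)                               # context units

    def step(self, symbol_idx):
        # One-hot encode the current symbol.
        x = np.zeros(self.W_in.shape[1])
        x[symbol_idx] = 1.0
        # New hidden state combines the current input and the previous context.
        self.h = np.tanh(self.W_in @ x + self.W_rec @ self.h)
        # Distribution over the next symbol.
        return softmax(self.W_out @ self.h)

# Feed a short symbol sequence and collect next-symbol distributions.
srn = ElmanSRN(n_symbols=4, n_hidden=8)
predictions = [srn.step(s) for s in [0, 1, 2, 1]]
```

In the prediction task described above, each output distribution would be compared against the symbol that actually follows; RTRL or EKF would then adjust the three weight matrices accordingly.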