This paper focuses on gradient-based backpropagation algorithms that use either a single adaptive learning rate shared by all weights or a separate adaptive learning rate for each weight. In both cases, the learning-rate adaptation relies on descent techniques and on estimates of local constants that are obtained without additional evaluations of the error function or its gradient. Three algorithms are proposed to improve these versions of backpropagation training in terms of both convergence rate and convergence characteristics, such as stable learning and robustness to oscillations. The proposed modification consists of a simple change in the error signal function. Experiments compare and evaluate the convergence behavior of these gradient-based training algorithms on three popular training problems: the XOR problem, the encoding problem, and character recognition.
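
To make the idea of a separate adaptive learning rate per weight concrete, the following is a minimal sketch only. It uses a common sign-based heuristic (increase a weight's rate when successive gradient components agree in sign, decrease it otherwise); this is not the paper's specific algorithm, and the function names, constants, and toy error function are assumptions chosen for illustration.

```python
# Minimal sketch of gradient descent with one adaptive learning rate per weight.
# Illustrative only: the sign-based adaptation rule and all constants below are
# assumptions, not the algorithms proposed in the paper.
import numpy as np

def train_per_weight_adaptive(grad_fn, w, steps=100,
                              eta0=0.1, up=1.2, down=0.5):
    """Gradient descent where every weight keeps its own learning rate.

    grad_fn : callable returning the error gradient at w
    w       : initial weight vector (a modified copy is returned)
    """
    w = w.astype(float).copy()
    eta = np.full_like(w, eta0)        # one learning rate per weight
    prev_g = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        eta = np.where(g * prev_g > 0, eta * up, eta)    # same sign: speed up
        eta = np.where(g * prev_g < 0, eta * down, eta)  # sign flip: slow down
        w -= eta * g                   # per-weight descent step
        prev_g = g
    return w

# Toy usage: minimize the quadratic error E(w) = 0.5 * ||w - target||^2,
# whose gradient is simply w - target.
target = np.array([1.0, -2.0, 3.0])
w_final = train_per_weight_adaptive(lambda w: w - target,
                                    np.zeros(3), steps=50)
print(w_final)   # should approach [1.0, -2.0, 3.0]
```

A single shared adaptive learning rate corresponds to replacing the vector `eta` with a scalar updated from the full gradient; the per-weight variant simply applies the same adaptation rule componentwise.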