In this paper, we focus on reasoning with IF-THEN rules in the propositional fragment of predicate calculus and on modeling this reasoning with neural networks. First, IF-THEN deduction from facts is defined. It is then proved that for any non-contradictory set of IF-THEN rules and literals (representing facts), there exists a layered recurrent network with two hidden layers that computes all IF-THEN deducible literals. If we denote the set of all literal IF-THEN consequences by D_0 and the set of all literal logical consequences by D, then obviously D_0 \subseteq D, so D_0 can be regarded as an approximation of D. By using the designed network to simulate proof by contradiction, the approximation D_0 can easily be refined. Moreover, the network can also be used to determine D itself; however, the algorithm that realizes the necessary network computations has exponential complexity.
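To make the notion of the IF-THEN consequence set D_0 concrete, the following is a minimal sketch (not the paper's network construction) of computing D_0 by forward chaining to a fixed point; the literal and rule representations, and the function name if_then_closure, are illustrative assumptions introduced here.

```python
from typing import FrozenSet, List, Tuple

Literal = str                              # e.g. "p" or "~p" (assumed encoding)
Rule = Tuple[FrozenSet[Literal], Literal]  # (IF-part literals, THEN-literal)

def if_then_closure(facts: List[Literal], rules: List[Rule]) -> FrozenSet[Literal]:
    """Return all literals deducible from `facts` by repeated IF-THEN rule application."""
    derived = set(facts)
    changed = True
    while changed:                         # iterate until no rule adds a new literal
        changed = False
        for antecedents, consequent in rules:
            # Fire a rule when all its IF-part literals are already derived.
            if antecedents <= derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return frozenset(derived)

if __name__ == "__main__":
    rules = [
        (frozenset({"p"}), "q"),
        (frozenset({"q", "r"}), "~s"),
    ]
    # From facts {p, r}: q is derived from p, then ~s from {q, r}.
    print(sorted(if_then_closure(["p", "r"], rules)))   # ['p', 'q', 'r', '~s']
```

Each pass of the while-loop plays the role of one recurrent step of the network described in the paper: one layer would test whether a rule's IF-part is satisfied, the next would collect the derived literals, and iteration stops once a fixed point (the set D_0) is reached.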