This work concerns a class of discrete-time, zero-sum games between two players with Markov transitions on a denumerable state space. At each decision time, player II can stop the system by paying a terminal reward to player I; if the system is not halted, player I selects an action to drive the system and receives a running reward from player II. The performance of a pair of decision strategies is measured by the total expected discounted reward. Under standard continuity-compactness conditions, it is shown that this stopping game has a value function characterized by an equilibrium equation, and this result is used to establish the existence of a Nash equilibrium. In addition, the method of successive approximations is used to construct approximate Nash equilibria for the game.
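The successive-approximations scheme mentioned above can be illustrated numerically. The sketch below is a minimal, hypothetical instance (a finite state space and all numerical data are invented for illustration; the paper itself treats a denumerable space): the equilibrium operator takes the minimum of the terminal reward, which player II secures by stopping, and the best discounted continuation value available to player I, and is iterated until the value function converges.

```python
import numpy as np

# Hypothetical toy instance: 3 states, 2 actions; all data invented.
alpha = 0.9                                  # discount factor
G = np.array([5.0, 2.0, 8.0])                # terminal reward paid on stopping
r = np.array([[1.0, 0.5],                    # running reward r(x, a)
              [0.2, 1.5],
              [0.8, 0.3]])
P = np.array([[[0.7, 0.2, 0.1],              # transition kernel P(y | x, a)
               [0.1, 0.6, 0.3]],
              [[0.3, 0.3, 0.4],
               [0.5, 0.4, 0.1]],
              [[0.2, 0.5, 0.3],
               [0.4, 0.1, 0.5]]])

def equilibrium_operator(V):
    """One step of the equilibrium equation:
    T V(x) = min( G(x), max_a [ r(x,a) + alpha * sum_y P(y|x,a) V(y) ] )."""
    continuation = r + alpha * P @ V         # shape (states, actions)
    return np.minimum(G, continuation.max(axis=1))

# Successive approximations: T is an alpha-contraction in the sup norm,
# so the iterates V_n converge geometrically to the value function.
V = np.zeros(3)
for _ in range(500):
    V_new = equilibrium_operator(V)
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new
```

After convergence, approximate equilibrium strategies can be read off the fixed point: player II stops wherever the terminal reward attains the minimum, and player I plays a maximizing action in the continuation term.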