Classifier combining is a popular technique for improving classification quality. Common methods for classifier combining can be further improved by using dynamic classification confidence measures, which adapt to the currently classified pattern. However, in the case of dynamic classifier systems, classification confidence measures need to be studied in a broader context: as we show in this paper, the degree of consensus of the whole classifier team plays a key role in the process. We discuss the properties a good confidence measure should have, and we define two methods for predicting the feasibility of applying a given classification confidence measure to a given classifier team and given data. Experimental results on 6 artificial and 20 real-world benchmark datasets show that, for both methods, there is a statistically significant correlation between the predicted feasibility of the measure and the actual improvement in the classification accuracy of the whole classifier system; therefore, both feasibility measures can be used in practical applications to choose an optimal classification confidence measure.