Support Vector Machines (SVM) are well known as a kernel-based method mostly applied to classification. SVM Recursive Feature Elimination (SVM-RFE) is a variable ranking and selection method dedicated to the design of SVM-based classifiers. In this paper, we revisit the SVM-RFE method. We study two implementations of this feature selection method, which we call External SVM-RFE and Internal SVM-RFE, respectively. Both implementations are applied to rank and select acoustic features extracted from speech in order to design optimized linear SVM classifiers that recognize speaker emotions. To show the efficiency of the External and Internal SVM-RFE methods, an extensive experimental study is presented. The SVM classifiers were selected using a validation procedure that ensures strict speaker independence. The results are discussed and compared with those achieved when the features are ranked using the Gram-Schmidt procedure. Overall, the selected classifiers achieve recognition rates exceeding 90%.
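
The general SVM-RFE recipe (train a linear SVM, rank features by the magnitude of their weights, eliminate the lowest-ranked, and repeat) can be sketched with scikit-learn's generic RFE wrapper. This is a minimal illustration of the standard algorithm, not the paper's specific External or Internal variants; the synthetic data, feature count, and step size are assumptions chosen for the example.

```python
# Sketch of linear SVM-RFE: recursively eliminate the feature whose
# linear-SVM weight has the smallest magnitude.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.feature_selection import RFE
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))           # 200 samples, 40 synthetic "acoustic" features
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # labels driven by two informative features

# One feature is removed per iteration (step=1), ranked by |w_i| of the SVM.
svm = LinearSVC(C=1.0, max_iter=10000)
rfe = RFE(estimator=svm, n_features_to_select=5, step=1)
rfe.fit(StandardScaler().fit_transform(X), y)

print(sorted(np.flatnonzero(rfe.support_)))  # indices of the 5 retained features
print(int(rfe.ranking_.max()))               # highest rank = first feature eliminated
```

With 40 features reduced to 5 one at a time, 35 eliminations occur, so the ranking values run from 1 (kept) up to 36 (eliminated first).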