This paper deals with four types of point estimators based on minimization of information-theoretic divergences between hypothetical and empirical distributions. These were introduced
\begin{enumerate}
\item[(i)] by Liese and Vajda \cite{9} and independently by Broniatowski and Keziou \cite{3}, called here \textsl{power superdivergence estimators},
\item[(ii)] by Broniatowski and Keziou \cite{4}, called here \textsl{power subdivergence estimators},
\item[(iii)] by Basu et al. \cite{2}, called here \textsl{power pseudodistance estimators}, and
\item[(iv)] by Vajda \cite{18}, called here \textsl{Rényi pseudodistance estimators}.
\end{enumerate}
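All four families share a common structure: the estimator minimizes a divergence-type criterion between the hypothetical model and the data. As an illustrative sketch in generic notation (the symbols $D$, $P_{\theta}$ and $P_{n}$ are not taken from the cited papers), such an estimator has the form
\begin{equation*}
\widehat{\theta}_{n}=\arg\min_{\theta\in\Theta}D\!\left(P_{\theta},P_{n}\right),
\end{equation*}
where $P_{\theta}$ denotes the hypothetical model distribution, $P_{n}$ the empirical distribution of the sample, and $D$ the divergence or pseudodistance defining the particular family.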
These criteria share the property that they eliminate any need for grouping or smoothing in statistical inference. The paper studies and compares general properties of these estimators, such as Fisher consistency and influence curves, and illustrates these properties by a detailed analysis of their application to the estimation of normal location and scale.
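For reference, the notions compared here are the standard ones of robust statistics; in generic notation (a reminder, not the paper's specific derivations), a statistical functional $T$ is Fisher consistent if $T(P_{\theta})=\theta$ for all $\theta$ in the model, and its influence curve at a distribution $F$ is
\begin{equation*}
\mathrm{IF}(x;T,F)=\lim_{\varepsilon\downarrow 0}\frac{T\bigl((1-\varepsilon)F+\varepsilon\delta_{x}\bigr)-T(F)}{\varepsilon},
\end{equation*}
where $\delta_{x}$ denotes the Dirac measure at the point $x$.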