The aim of this article is to quantify how often authors in leading Czech social-science journals (Československá psychologie / Czechoslovak Psychology, Pedagogika / Pedagogy, and Sociologický časopis / Czech Sociological Review) choose inappropriate procedures for analysing quantitative data. Attention is focused in particular on the incorrect choice of statistical tests, their misinterpretation and mechanical application, and on the use of effect sizes, which are now widely recommended. The basic research period was the ten years from 2005 to 2014; for the Czech Sociological Review it was extended back to 1995. The results of a content analysis of the published articles (N = 363) show that statistical tests are quite often applied to data for which they are not suitable: in about one-fifth of cases in the Czech Sociological Review, one-half in Pedagogy, and more than three-quarters in Czechoslovak Psychology. In addition, authors often apply statistical methods mechanically or interpret them incorrectly (in over 40% of articles in the Czech Sociological Review over the last ten years), and substantive interpretations of the results are rare (especially in Czechoslovak Psychology). Effect sizes are reported relatively often, but there are gaps in their usage as well. The results make clear that changes are needed both in the teaching of quantitative methodology and in publishing practices in this field.
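As an illustration of the reporting practice the article advocates (an effect size accompanying a test statistic, rather than a p-value alone), the following is a minimal sketch in Python of Cohen's d, the standardized mean difference between two groups. The sample data here are invented for illustration and are not taken from any of the analysed articles.

```python
import math

# Hypothetical data: two invented samples (not from the article's corpus).
group_a = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.2, 4.7]
group_b = [4.2, 4.5, 4.1, 4.6, 4.4, 4.0, 4.3, 4.7]

def mean(xs):
    return sum(xs) / len(xs)

def sample_var(xs):
    # Unbiased sample variance (divides by n - 1).
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cohens_d(a, b):
    # Pooled-standard-deviation version of Cohen's d:
    # the mean difference expressed in pooled-SD units.
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * sample_var(a) + (nb - 1) * sample_var(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled_var)

d = cohens_d(group_a, group_b)
print(round(d, 2))
```

Unlike a p-value, d does not shrink toward "significance" merely because the sample grows, which is why methodological guidelines recommend reporting it alongside the test result.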