Volume 5 • Issue 3 | November 2021

The APP procedure for estimating the Cohen's effect size

Xiangfei Chen, David Trafimow, Tonghui Wang, Tingting Tong, and Cong Wang

Abstract:

Purpose
Cohen's d, which indexes the difference in means in standard deviation units, is the most popular effect size measure in the social sciences and economics. Not surprisingly, researchers have developed statistical procedures for estimating the sample sizes needed to have a desirable probability of rejecting the null hypothesis given assumed values for Cohen's d, or for estimating the sample sizes needed to have a desirable probability of obtaining a confidence interval of a specified width. However, for researchers interested in using the sample Cohen's d to estimate the population value, these procedures are insufficient. Therefore, it would be useful to have a procedure for obtaining the sample sizes needed to be confident that the sample Cohen's d to be obtained is close to the population parameter the researcher wishes to estimate, an expansion of the a priori procedure (APP).
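In symbols (a minimal restatement of the standard two-independent-samples definitions, which the abstract gives only in words; the notation below is assumed here rather than taken from the excerpt), the population effect size and its sample estimate are

d = \frac{\mu_1 - \mu_2}{\sigma}, \qquad \hat{d} = \frac{\bar{x}_1 - \bar{x}_2}{s_{\mathrm{pooled}}},

and the APP seeks the smallest sample size n for which

\Pr\big( |\hat{d} - d| \le f \big) \ge c,

where f is the desired precision and c the desired probability of attaining it.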

Design/methodology/approach
The authors derive the necessary mathematics, provide computer simulations and links to free and user-friendly computer programs, and analyze real data sets to illustrate the main results.

Findings
In this paper, the authors answer two questions. The precision question: how close do I want my sample Cohen's d to be to the population value? The confidence question: what probability do I want of being within that specified distance?

Originality/value
To the best of the authors' knowledge, this is the first paper to extend the APP to estimating Cohen's effect size. The free online computing packages make the procedure convenient for researchers and practitioners to apply.
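The closed-form APP equations are derived in the full paper and implemented in the authors' online programs; neither appears in this excerpt. As a rough illustration only of how the precision question (f) and the confidence question (c) translate into a sample size, the Python sketch below (our own construction, not the authors' code) uses Monte Carlo simulation under assumed conditions: normal data, equal variances, equal group sizes, and a prespecified population d, which is needed because the sampling precision of the estimated d depends on its true value.

import numpy as np

def coverage_probability(n, pop_d, f, n_sims=20000, rng=None):
    """Monte Carlo estimate of P(|d_hat - d| <= f) for two groups of size n,
    assuming normal data with unit variance and a true Cohen's d of pop_d."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = rng.normal(0.0, 1.0, size=(n_sims, n))    # group 1
    y = rng.normal(pop_d, 1.0, size=(n_sims, n))  # group 2, shifted by pop_d
    m1, m2 = x.mean(axis=1), y.mean(axis=1)
    v1, v2 = x.var(axis=1, ddof=1), y.var(axis=1, ddof=1)
    s_pooled = np.sqrt((v1 + v2) / 2.0)           # pooled SD for equal group sizes
    d_hat = (m2 - m1) / s_pooled                  # sample Cohen's d in each simulation
    return np.mean(np.abs(d_hat - pop_d) <= f)

def required_n(pop_d, f, c, n_max=2000):
    """Smallest per-group n whose simulated coverage reaches the confidence c
    (brute-force search; purely illustrative, not the APP's closed form)."""
    for n in range(5, n_max + 1):
        if coverage_probability(n, pop_d, f) >= c:
            return n
    return None

if __name__ == "__main__":
    # Example: be 95% confident (c) that the sample d lands within f = 0.2
    # of an assumed population d = 0.5.
    print(required_n(pop_d=0.5, f=0.2, c=0.95))

Because it relies on brute-force simulation, this search is slow and noisy compared with the analytic solution the paper derives; it is included only to make the roles of f and c concrete.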

References:

  1. Bhandari, P. (2020), "Effect Size in Statistics", Scribbr, available at: https://www.scribbr.com/statistics/effect-size/.
  2. Cohen, J. (1988), Statistical Power Analysis for the Behavioral Sciences, 2nd ed., Erlbaum, Hillsdale, NJ.
  3. Champely, S. (2018), "PairedData: paired data analysis. R package version 1.1.1", available at: https://CRAN.R-project.org/package=PairedData.
  4. Li, H., Trafimow, D., Wang, T., Wang, C. and Hu, L. (2020), “User-friendly computer programs so econometricians can run the a priori procedure”, Frontiers in Management and Business, Vol. 1 No. 1, pp. 2-6, doi: 10.25082/FMB.2020.01.002.
  5. Nguyen, H.T. and Wang, T. (2008), A Graduate Course in Probability and Statistics, Vol. 2, Tsinghua University Press, Beijing.
  6. Open Science Collaboration (2015), “Estimating the reproducibility of psychological science”, Science, Vol. 349 No. 6251, p. aac4716, doi: 10.1126/science.aac4716.
  7. Schäfer, T. and Schwarz, M. (2019), "The meaningfulness of effect sizes in psychological research: differences between sub-disciplines and the impact of potential biases", Frontiers in Psychology, Vol. 10, p. 813, doi: 10.3389/fpsyg.2019.00813.
  8. Sullivan, G.M. and Feinn, R. (2012), “Using effect size-or why the p value is not enough”, Journal of Graduate Medical Education, Vol. 4 No. 3, pp. 279-282, doi: 10.4300/JGME-D-12-00156.1.
  9. Trafimow, D. (2017), “Using the coefficient of confidence to make the philosophical switch from a posteriori to a priori inferential statistics”, Educational and Psychological Measurement, Vol. 77 No. 5, pp. 831-854, doi: 10.1177/0013164416667977.
  10. Trafimow, D. (2019), “A frequentist alternative to significance testing, p-values, and confidence intervals”, Econometrics, Vol. 7 No. 2, pp. 1-14, available at: https://www.mdpi.com/2225-1146/7/2/26.
  11. Trafimow, D. and MacDonald, J.A. (2017), “Performing inferential statistics prior to data collection”, Educational and Psychological Measurement, Vol. 77 No. 2, pp. 204-219, doi: 10.1177/0013164416659745.
  12. Trafimow, D. and Myüz, H.A. (2019), “The sampling precision of research in five major areas of psychology”, Behavior Research Methods, Vol. 51 No. 5, pp. 2039-2058, doi: 10.3758/s13428-018-1173-x.
  13. Trafimow, D. and Uhalt, J. (2020), “The inaccuracy of sample-based confidence intervals to estimate a priori ones”, Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, Vol. 16 No. 2, pp. 112-126, doi: 10.5964/meth.2807.
  14. Trafimow, D., Wang, C. and Wang, T. (2020a), “Making the a priori procedure (APP) work for differences between means”, Educational and Psychological Measurement, Vol. 80 No. 1, pp. 186-198, doi: 10.1177/0013164419847509.
  15. Trafimow, D., Hyman, M.R. and Kostyk, A. (2020b), "The (im)precision of scholarly consumer behavior research", Journal of Business Research, Vol. 114, pp. 93-101, doi: 10.1016/j.jbusres.2020.04.008.
  16. Wang, C., Wang, T., Trafimow, D. and Talordphop, K. (2020), “Extending the a priori procedure to one-way analysis of variance model with skew normal random effects”, Asian Journal of Economics and Banking, Vol. 4 No. 2, pp. 77-90.
  17. Wang, C., Wang, T., Trafimow, D., Li, H., Hu, L. and Rodriguez, A. (2021), "Extending the a priori procedure (APP) to address correlation coefficients", in Ngoc Thach, N., Kreinovich, V. and Trung, N.D. (Eds), Data Science for Financial Econometrics, Springer, doi: 10.1007/978-3-030-48853-6_10.
  18. Wei, Z., Wang, T., Trafimow, D. and Talordphop, K. (2020), “Extending the a priori procedure to normal Bayes models”, International Journal of Intelligent Technologies and Applied Statistics, Vol. 13 No. 2, pp. 169-183, doi: 10.6148/IJITAS.202006-13(2).0004.

Further reading

  1. Baguley, T. (2009), "Standardized or simple effect size: what should be reported?", British Journal of Psychology, Vol. 100 No. 3, pp. 603-617.
  2. Bobko, P., Roth, P.L. and Bobko, C. (2001), “Correcting the effect size of d for range restriction and unreliability”, Organizational Research Methods, Vol. 4 No. 1, pp. 46-61.
  3. Cohen, J. (2013), Statistical Power Analysis for the Behavioral Sciences, Elsevier Science, United Kingdom.
  4. Demidenko, E. (2016), “The p-value you can't buy”, The American Statistician, Vol. 70 No. 1, pp. 33-38, doi: 10.1080/00031305.2015.1069760.
  5. Gillett, R. (2003), “The metric comparability of meta-analytic effect-size estimators from factorial designs”, Psychological Methods, Vol. 8 No. 4, pp. 419-433.
  6. Hedges, L.V. (1981), “Distribution theory for Glass's estimator of effect size and related estimators”, Journal of Educational Statistics, Vol. 6 No. 2, p. 107, doi: 10.2307/1164588.
  7. Wang, C., Wang, T., Trafimow, D. and Chen, J. (2019), “Extending a priori procedure to two independent samples under skew normal settings”, Asian Journal of Economics and Banking, Vol. 3 No. 2, pp. 29-40.