https://qcmb.psychopen.eu/index.php/qcmb/issue/feed Quantitative and Computational Methods in Behavioral Sciences 2024-12-23T05:32:26-08:00 Georg Timo von Oertzen editors@qcmb.psychopen.eu Open Journal Systems <h1>Quantitative and Computational Methods in Behavioral Sciences</h1> <h2 class="mt-0">An online-only, open-access journal for the development of methods in psychology and related fields — <em>Free of charge for authors and readers</em></h2> <hr> <p><img class="mr-4 mb-3 border" style="float: left;" src="/public/journals/7/qcmb_cover.png" alt="Cover" width="226" height="320">We strive to foster the development of methods in psychology and related fields. With this aim, we publish scientific articles that extend the understanding of the foundational mathematics used in psychological methods, develop new methods and the software and hardware they rely on, compare new or existing methods, and disseminate this knowledge to a broader audience in psychology and related fields.</p> <p>Articles are published in two sections: The fundamental research section targets an audience of quantitative psychologists, mathematicians, and statisticians with an interest in psychological applications of computational, statistical, and mathematical models, while the method dissemination section concentrates on methodological articles aimed at social scientists who want to apply state-of-the-art analysis methods.</p> <p>We are dedicated to Open Science: All published articles are openly available for free, and there is no fee for either review or publication. 
All articles are made available as preprints as soon as they enter the review process, which keeps them accessible regardless of the editorial decision.</p> https://qcmb.psychopen.eu/index.php/qcmb/article/view/13059 Informative Hypothesis Testing in the EffectLiteR Framework: A Tutorial 2024-12-23T05:32:25-08:00 Caroline Keck Yves.Rosseel@UGent.be Axel Mayer Yves.Rosseel@UGent.be Yves Rosseel Yves.Rosseel@UGent.be <p>In this paper, we illustrate how the typical workflow in analyzing psychological data, including analysis of variance and null hypothesis significance testing, may fail to bridge the gap between research questions and statistical procedures. It fails because it does not provide the quantities of interest, which are often average and conditional effects, and it is insufficient because it does not take the researcher's expectations about these quantities into account. Using a running example, we demonstrate that the EffectLiteR framework, combined with informative hypothesis testing, is better suited to narrowing the gap between research questions and statistical procedures. Furthermore, we provide two empirical data examples, one in the context of linear regression and one in the context of the generalized linear model, to further illustrate the use of informative hypothesis testing in the EffectLiteR framework.</p> 2024-12-23T00:00:00-08:00 Copyright (c) 2024 Caroline Keck, Axel Mayer, Yves Rosseel https://qcmb.psychopen.eu/index.php/qcmb/article/view/12069 Independent Validation as a Validation Method for Classification 2023-12-22T02:38:04-08:00 Tina Braun tina.braun@charlotte-fresenius-uni.de Hannes Eckert tina.braun@charlotte-fresenius-uni.de Timo von Oertzen tina.braun@charlotte-fresenius-uni.de <p>The use of classifiers provides an alternative to conventional statistical methods. The accuracy with which a classifier assigns data to the correct group can be used in statistical tests that compare the performance of classifiers. 
The conventional validation methods for determining the accuracy of classifiers have the disadvantage that the number of correct classifications does not follow any known distribution, which makes the application of statistical tests problematic. Independent validation circumvents this problem and allows the use of binomial tests to assess the performance of classifiers. However, independent validation accuracy is subject to bias for small training datasets. The present study shows that a hyperbolic function can be used to estimate the loss in classifier accuracy under independent validation. This function is used to develop three new methods that estimate classifier accuracy for small training sets more precisely. These methods are compared to two existing methods in a simulation study. The results show overall small errors in the estimation of classifier accuracy and suggest that independent validation can be used with small samples. A least squares estimation approach seems best suited to estimating classifier accuracy.</p> 2023-12-22T00:00:00-08:00 Copyright (c) 2023 Tina Braun, Hannes Eckert, Timo von Oertzen https://qcmb.psychopen.eu/index.php/qcmb/article/view/10087 Estimating Item Parameters in Multistage Designs With the tmt Package in R 2024-02-15T03:05:34-08:00 Jan Steinfeld jan.d.steinfeld@gmail.com Alexander Robitzsch jan.d.steinfeld@gmail.com <p>Various likelihood-based methods are available for the parameter estimation of item response theory (IRT) models, leading to comparable parameter estimates. Considering multistage testing (MST) designs, Glas (1988; https://doi.org/10.2307/1164950) stated that the conditional maximum likelihood (CML) method in its original formulation leads to severely biased parameter estimates. A modified CML estimation method for MST designs proposed by Zwitser and Maris (2015; https://doi.org/10.1007/s11336-013-9369-6) provides asymptotically unbiased item parameter estimates. 
Steinfeld and Robitzsch (2021b; https://doi.org/10.31234/osf.io/ew27f) extended this method to MST designs with probabilistic routing strategies. For both proposed modifications, additional software is required, since design-specific information must be incorporated into the estimation process. The R package "tmt" implements both modifications. In this article, the proposed solutions for CML estimation in MST designs are first illustrated, followed by the main part: a demonstration of CML item parameter estimation with the R package "tmt". The demonstration covers model specification, data simulation, and item parameter estimation for both deterministic and probabilistic routing in MST designs.</p> 2023-11-06T00:00:00-08:00 Copyright (c) 2023 Jan Steinfeld, Alexander Robitzsch https://qcmb.psychopen.eu/index.php/qcmb/article/view/8383 Chain Graph Reduction Into Power Chain Graphs 2023-03-09T03:14:46-08:00 Vithor Rosa Franco vithorfranco@gmail.com Guilherme Wang Barros vithorfranco@gmail.com Marie Wiberg vithorfranco@gmail.com Jacob Arie Laros vithorfranco@gmail.com <p>Graph reduction is a class of procedures used to decrease the dimensionality of a given graph, such that the properties of the reduced graph can be induced from the properties of the larger original graph. This paper introduces both a new method for reducing chain graphs to simpler directed acyclic graphs (DAGs), which we call power chain graphs (PCGs), and a procedure for learning the structure of this new type of graph from correlational data of a Gaussian graphical model. A definition of PCGs is given, directly followed by the reduction method. The structure learning procedure is a two-step approach: first, the correlation matrix is used to cluster the variables; then, the averaged correlation matrix is used to discover the DAGs using the PC-stable algorithm. 
Simulation results are provided to illustrate the theoretical proposal and offer initial evidence for the validity of our procedure in recovering the structure of power chain graphs. The paper ends with a discussion of suggestions for future studies and some practical implications.</p> 2022-12-22T00:00:00-08:00 Copyright (c) 2022 Vithor Rosa Franco, Guilherme Wang Barros, Marie Wiberg, Jacob Arie Laros https://qcmb.psychopen.eu/index.php/qcmb/article/view/6205 Quantitative Psychology as Mediator Between Mathematics and Psychology 2021-05-11T01:44:02-07:00 Timo von Oertzen timo@unibw.de 2021-05-11T00:00:00-07:00 Copyright (c) 2021 Timo von Oertzen https://qcmb.psychopen.eu/index.php/qcmb/article/view/2979 A Tutorial for Joint Modeling of Longitudinal and Time-to-Event Data in R 2021-05-11T01:44:07-07:00 Sezen Cekic sezen.cekic@unige.ch Stephen Aichele saichele@gmail.com Andreas M. Brandmaier brandmaier@mpib-berlin.mpg.de Ylva Köhncke koehncke@mpib-berlin.mpg.de Paolo Ghisletta Paolo.Ghisletta@unige.ch <p>In biostatistics and medical research, longitudinal data are often composed of repeated assessments of a variable and dichotomous indicators that mark an event of interest. Consequently, joint modeling of longitudinal and time-to-event data has generated much interest in these disciplines over the past decade. In the behavioural sciences, too, we are often interested in relating individual trajectories to discrete events. Yet joint modeling is rarely applied in the behavioural sciences more generally. This tutorial presents an overview and general framework for joint modeling of longitudinal and time-to-event data, and fully illustrates its application in the context of a behavioural study with the JMbayes R package. In particular, the tutorial discusses practical topics such as model selection and comparison, the choice of joint modeling parameterization, and the interpretation of model parameters. 
Ultimately, this tutorial aims to provide a didactic introduction to the theory of joint modeling and to introduce novice analysts to the use of the JMbayes package.</p> 2021-05-11T00:00:00-07:00 Copyright (c) 2021 Sezen Cekic, Stephen Aichele, Andreas M. Brandmaier, Ylva Köhncke, Paolo Ghisletta https://qcmb.psychopen.eu/index.php/qcmb/article/view/3783 A Note on a Computationally Efficient Implementation of the EM Algorithm in Item Response Models 2021-05-11T01:44:28-07:00 Alexander Robitzsch robitzsch@leibniz-ipn.de <p>This note sketches two computational shortcuts for estimating unidimensional item response models and multidimensional item response models with between-item dimensionality, utilizing an expectation-maximization (EM) algorithm that relies on numerical integration with fixed quadrature points. It is shown that appropriate shortcuts can reduce the number of operations required in the E-step in situations with many cases and many items. Consequently, software implementations of a modified E-step in the EM algorithm could benefit from gains in computation time.</p> 2021-05-11T00:00:00-07:00 Copyright (c) 2021 Alexander Robitzsch https://qcmb.psychopen.eu/index.php/qcmb/article/view/3763 A Reproducible Data Analysis Workflow With R Markdown, Git, Make, and Docker 2021-05-11T01:44:18-07:00 Aaron Peikert peikert@mpib-berlin.mpg.de Andreas M. Brandmaier brandmaier@mpib-berlin.mpg.de <p>In this tutorial, we describe a workflow to ensure the long-term reproducibility of R-based data analyses. The workflow leverages established tools and practices from software engineering. It combines the benefits of various open-source software tools, including R Markdown, Git, Make, and Docker, whose interplay ensures seamless integration of version management, dynamic report generation conforming to various journal styles, and full cross-platform and long-term computational reproducibility. 
The workflow ensures that three primary goals are met: 1) the reporting of statistical results is consistent with the actual statistical results (dynamic report generation), 2) the analysis can be exactly reproduced at a later point in time even if the computing platform or software has changed (computational reproducibility), and 3) changes at any time (during development and post-publication) are tracked, tagged, and documented, while earlier versions of both data and code remain accessible. While the research community increasingly recognizes dynamic document generation and version management as tools to ensure reproducibility, we demonstrate with practical examples that these alone are not sufficient to ensure long-term computational reproducibility. By combining containerization, dependency management, version management, and dynamic document generation, the proposed workflow increases scientific productivity by facilitating later reproducibility and reuse of code and data.</p> 2021-05-11T00:00:00-07:00 Copyright (c) 2021 Aaron Peikert, Andreas M. Brandmaier
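To give a flavour of the kind of workflow the last abstract describes, the following is a minimal, hypothetical Makefile sketch (the image name, file names, and directory layout are illustrative assumptions, not the authors' actual project files): a pinned Docker image provides the R environment, and Make rebuilds the rendered manuscript only when its inputs change.

```make
# Hypothetical sketch of a Make + Docker + R Markdown workflow.
# IMAGE tag, manuscript.Rmd, and data/raw.csv are assumed names for illustration.
IMAGE := reproducible-manuscript:2021-05-11   # pinned tag for long-term reproducibility

all: manuscript.pdf

# Build the container image from a Dockerfile that pins R and package versions
build: Dockerfile
	docker build -t $(IMAGE) .

# Render the R Markdown source inside the container, so results do not
# depend on the host machine's R installation or installed packages
manuscript.pdf: manuscript.Rmd data/raw.csv build
	docker run --rm -v "$(CURDIR)":/project -w /project $(IMAGE) \
	  Rscript -e "rmarkdown::render('manuscript.Rmd')"

clean:
	rm -f manuscript.pdf
```

Under this sketch, `git` tracks the Rmd source, data, Dockerfile, and Makefile together, so checking out an earlier commit and running `make` reproduces the corresponding version of the report.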