
Quantitative Psychology Brownbag

Marco Chen
Mon, February 6, 2023
12:30 pm - 1:30 pm
Zoom


Marco Chen
Department of Psychology & Neuroscience
University of North Carolina at Chapel Hill

Title: Modeling Growth When Measurement Properties Change Between Persons and Within Persons Over Time: A Bayesian Regularized Second-Order Growth Curve Model

Abstract: 

A common interest in educational and psychological measurement is to examine change over time through longitudinal assessments. To accurately capture true change in an underlying construct of interest, we must also account for changes in the way the construct manifests itself over time. One essential approach uses longitudinal measurement models, which analyze construct change over time while evaluating item characteristics at each timepoint. However, traditional longitudinal measurement and second-order growth models have limitations, such as an inability to incorporate time-varying covariates (TVCs) that take different values across individuals at a given timepoint. We propose an alternative model that draws on the advantages of regularized moderated nonlinear factor analysis (MNLFA; Bauer et al., 2021). This setup follows the MNLFA framework in using covariate moderation on item parameters to represent differential item functioning (DIF). The proposed model is more parsimonious than the traditional second-order growth model and is among the first to estimate DIF effects from both time-varying and time-invariant covariates. Additionally, this model can address DIF effects from multiple covariates simultaneously without imposing a priori item equality constraints. It does so by applying Bayesian regularization to DIF effects and identifying the model without using anchor items (Chen et al., 2022). The current study evaluates the performance of the proposed regularized longitudinal MNLFA model through a simulation and presents an empirical example on adolescent delinquency and early alcohol use. This study demonstrates the feasibility and importance of including both time-varying and time-invariant covariate effects in longitudinal measurement evaluation and growth modeling.

 

 

Marco Chen earned his B.S. in Commerce from the University of Virginia and his Ph.D. in Quantitative Psychology, along with an M.S. in Statistics, at the University of North Carolina at Chapel Hill. His research broadly explores how to model unobserved psychological constructs in a valid and generalizable way for samples that are heterogeneous with respect to background characteristics, such as gender, age, and race.

His graduate work with Dr. Daniel Bauer developed latent variable models to evaluate measurement scale items for any differential effectiveness in reflecting the target unobservable construct across levels of background covariates, i.e., measurement bias. His proposed model can account for individual background differences that can influence observed responses irrespective of actual disparities in the underlying constructs of interest, thus improving the validity and fairness of scores on the constructs. This model can evaluate many items over categorical and continuous background covariates simultaneously and achieves superior detection rates of measurement bias. His dissertation work applied a statistical learning method called Bayesian regularization to distinguish change in item measurement effectiveness from change in the true construct across individual ages and other time-varying covariates in longitudinal measurement data.

His postdoctoral work will investigate the extent to which university exam item construction could influence how students from different backgrounds respond to the questions. His projects will apply his measurement modeling skills to quantify the influence of item design on exam performance across demographic groups and ultimately promote guidelines for equitable exam design. His work will also consider the general methodological implications of measurement item bias in replicating psychology studies.