The use of latent variable mixture models to identify invariant items in test construction

Purpose Patient-reported outcome measures (PROMs) are frequently used in heterogeneous patient populations. PROM scores may lead to biased inferences when sources of heterogeneity (e.g., gender, ethnicity, and social factors) are ignored. Latent variable mixture models (LVMMs) can be used to examine measurement invariance (MI) when sources of heterogeneity in the population are not known a priori. The goal of this article is to discuss the use of LVMMs to identify invariant items within the context of test construction.
Methods The Draper-Lindley-de Finetti (DLD) framework for the measurement of latent variables provides a theoretical context for the use of LVMMs to identify the most invariant items in test construction. In an expository analysis using 39 items measuring daily activities, LVMMs were conducted to compare 1- and 2-class item response theory (IRT) models. If the 2-class model had better fit, item-level logistic regression differential item functioning (DIF) analyses were conducted to identify items that were not invariant. These items were removed, and the LVMMs and DIF testing were repeated until all remaining items showed MI.
Results The 39 items had an essentially unidimensional measurement structure. However, a 1-class IRT model resulted in many statistically significant bivariate residuals, indicating suboptimal fit due to remaining local dependence. A 2-class LVMM had better fit. Through subsequent rounds of LVMMs and DIF testing, nine items were identified as being most invariant.
Conclusions The DLD framework and the use of LVMMs have significant potential for advancing theoretical developments and research on item selection and the development of PROMs for heterogeneous populations.


Introduction
Factor analysis and item response theory (IRT) methods are established methods for item selection in test construction for quality of life and patient-reported outcome measures (PROMs) [1]. These methods focus on the dimensionality of a set of candidate items, where the goal is to identify those items that conform to a hypothesized and theoretically defensible dimensional structure. Measurement invariance is another important psychometric criterion that pertains to the equivalence of measurement model parameters across different subgroups of people in the population. This is particularly important when instruments are to be used in potentially heterogeneous populations of people who may differ in how they interpret and respond to questions about their health and quality of life. If the differences are caused by factors that are unrelated to the construct of interest, a test (i.e., measurement instrument) may produce biased scores. For example, if some respondents provide lower ratings for a general health item because they have difficulty in reading and understanding the item, their scores will be influenced by literacy, whereas the scores of others who have no difficulty in reading and understanding the item will not. This may in turn lead to incorrect inferences about the meaning of the scores, which are assumed to reflect only the construct of interest.
Several authors have argued for the importance of examining measurement invariance in test construction [2][3][4][5]. However, a particular challenge during test construction is that it is often not known a priori what characteristics of a population result in a lack of measurement invariance. In those situations, conventional approaches for examining measurement invariance with respect to selected manifest variables [6] will be of limited use. Latent variable mixture models (LVMMs) have been proposed to address this challenge; they can be used to examine measurement invariance with respect to two or more latent (i.e., unobserved) classes [7][8][9].
In this paper, we propose and describe the use of LVMMs to guide the identification of invariant items in test construction. We first introduce the Draper-Lindley-de Finetti (DLD) framework of latent variable measurement as a useful theoretical context [10,11]. We then discuss how LVMMs could be used to assess measurement invariance. The methodological approach for using LVMMs in the context of test construction is discussed next. This is followed by a brief expository analysis demonstrating the approach using an existing item bank for the measurement of daily activities.

Theoretical context
The DLD framework relates the measurement of latent variables to two necessary conditions pertaining to the exchangeability of both measurement items and sampling units (i.e., people or groups of people) [10,11]. The first condition is that the items must be exchangeable such that they covary in a manner that is congruent with the measurement structure. Here, exchangeability refers to the notion that the items of a test are assumed to be drawn from a hypothetical pool of all possible representative items measuring the construct of interest (i.e., their dependencies are due only to the construct). The second condition is that the sampling units in the target population must be exchangeable such that the measurement model parameters are equivalently applicable to all individuals. These conditions reflect the fundamental assumption of local independence [12,13], which requires that (a) dimensionality among the items is accurately represented in the measurement structure, and (b) item responses provided by individuals, or groups of individuals, are independent from those provided by other individuals in the target population. In other words, violations of local independence may be due to heterogeneity among the items or heterogeneity within the sample [14].
The DLD framework further relates the conditions of exchangeability of items and sampling units to the types of inferences that can be made based on test (e.g., PROM) scores [11]. In so doing, it provides an important basis for measurement validation, where the focus is on the validity of inferences (including actions and decisions) that are made on test scores [15]. The particular inferences of interest here pertain to the extent to which a pool of items consistently reflects a latent variable in a potentially heterogeneous population. Exchangeability of items is necessary to warrant inferences about the test scores irrespective of the combination of items that are administered. In the DLD framework, this is referred to as "specific domain inference" [11], which is particularly important when there are different versions of a measurement instrument (e.g., short forms) or when people are exposed to different measurement items (e.g., in computerized adaptive testing). Exchangeability of sampling units refers to the homogeneity of the population. This condition is necessary to warrant "specific sampling inference" [11] based on a measurement structure and estimated parameters that are equivalently applicable (i.e., invariant) across different subgroups in the population.
A variety of statistical methods are available for examining each condition. The first condition, exchangeability of items, relates to the dimensional structure of a set of measurement items. Unidimensionality implies that the items are exchangeable with respect to a single latent variable; that is, their covariances are fully accounted for by the latent variable. Factor analysis and item response theory (IRT) methods are widely used to evaluate this condition during the process of test construction [1,16]. Items that conform to a hypothesized and theoretically defensible dimensional structure are retained, while those that do not (e.g., items exhibiting small factor loadings or discrimination parameters, cross-loadings on other dimensions, or poor internal consistency reliability) may be removed or revised, unless there are other reasons for retention.
The second condition, exchangeability of sampling units, relates to the degree to which residual covariances among items are explained by differences among individuals within the sample. Differential item functioning (DIF) methods are used to examine this condition by determining the invariance of item parameters with respect to various observed groups in the target population, such as those characterized by differences in demographic variables (gender, age, ethnicity) or various health-related variables (e.g., having one or more medical conditions). Examples of DIF techniques include multigroup confirmatory factor analysis [17,18], the Mantel-Haenszel procedure [19], logistic regression models [20][21][22], multidimensionality-based procedures [23] such as the simultaneous item bias test (SIBTEST) [24], and IRT DIF analysis techniques [25][26][27][28]. In summary, the DLD framework provides a useful theoretical context for test construction by drawing our attention to statistical conditions focusing on exchangeability of both items and sampling units. A predominant focus in test construction has been on the exchangeability of items by examining dependencies among items to inform item selection. The DLD framework provides the rationale for also focusing on the exchangeability of sampling units by considering the extent to which the measurement model parameters of individual items are equivalent, or invariant, across population subgroups. If the goal is to construct a measure that is broadly applicable in a general population, it is important to identify those items for which the parameters are most invariant. However, a limitation of conventional DIF techniques for the assessment of measurement invariance is that the relevant sources of DIF in the target population must be known a priori [14, 29-31].
As a result, DIF analyses will only be as good as the selection of observed variables that represent sources of DIF, which are unlikely to fully capture population heterogeneity [29,30]. This limitation is of particular concern when measurement instruments are used in large and potentially heterogeneous populations where the measurement model parameters are assumed to be invariant irrespective of any differences, known and unknown, in the target population. LVMMs are increasingly recommended to address this limitation by examining measurement invariance with respect to subgroups that are not specified a priori [7, 11, 14, 31-33].

LVMMs for examining measurement invariance
LVMMs allow for the simultaneous modeling of continuous latent variables that represent dependencies among measurement items (exchangeability of items), and latent classes that accommodate dependencies among individuals (exchangeability of sampling units). The latent classes represent subgroups of people who, relative to the overall population, are more homogeneous with respect to a specified statistical model (e.g., a measurement model). LVMMs have been used for a number of purposes, including, for example, to identify groups of individuals who exhibit certain response behaviors (e.g., socially desirable responding [e.g., 34], test-taking behaviors [e.g., 35]). They have also been used to identify groups of individuals with different symptom patterns and characteristics related to psychological conditions, such as anxiety sensitivity, panic disorder, and conduct disorder [e.g., 36-39], and have been proposed as a tool in the development of diagnostic classifications [e.g., 40]. In the context of test development, our interest lies in the use of LVMMs for the assessment of measurement invariance. Here, the focus is on measurement structures that include a continuous latent variable representing the construct of interest and latent classes (subgroups of individuals) that are defined by differences in the parameter estimates of the latent variable. If these differences occur between classes of individuals who are matched on the construct of interest, there is evidence that the measure lacks invariance.
Various LVMMs have been proposed for examining measurement invariance, including factor mixture models, Rasch and IRT mixture models, and extensions thereof. Factor mixture models combine factor analysis with latent class analysis by "nesting" the latent factor model within two or more latent classes [41][42][43]. In factor analysis, the measurement structure is assumed to hold across the population of interest. The addition of latent classes relaxes this assumption by allowing measurement model parameters (factor loadings, item thresholds or intercepts, and item residual variances) to vary across the classes. Similarly, in Rasch and IRT mixture models, the assumption of parameter invariance can be relaxed and population heterogeneity accommodated by allowing difficulty and discrimination parameters to differ across latent classes [29,30,44]. Based on these foundations, LVMMs can be used for the identification of invariant items in the context of test construction.
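As a minimal numerical illustration of how a mixture model relaxes parameter invariance, the sketch below mixes class-specific two-parameter logistic (2PL) item response curves. All parameter values are hypothetical, and this is a didactic sketch rather than the mixture models estimated in this article:

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of endorsing an item, given latent trait theta,
    discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def p_mixture(theta, class_probs, params):
    """Marginal response probability in a latent class mixture:
    params holds per-class (a, b) tuples; class_probs sum to 1.
    Allowing (a, b) to differ across classes is precisely the
    relaxation of parameter invariance described above."""
    return sum(w * p_2pl(theta, a, b)
               for w, (a, b) in zip(class_probs, params))

# hypothetical item whose difficulty differs across two latent classes (DIF)
p = p_mixture(theta=0.0, class_probs=[0.6, 0.4],
              params=[(1.5, -0.5), (1.5, 0.8)])
```

When the class-specific parameters are identical, the mixture collapses to the ordinary 1-class model, which is the sense in which the 1-class model is nested in the comparison described below.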

LVMM approach for item selection
The assessment of measurement invariance in the context of test construction comprises the following five sequential steps of identifying and removing noninvariant items while comparing the fit of the resulting 1- and 2-class models (see Fig. 1). The approach can be described as follows (methodological details are presented in the expository analysis):
Step 1: The first step pertains to the exchangeability of items, where the objective is to establish a theoretically defensible measurement structure of a candidate pool of items through the application of factor analysis methods in the full sample.
Step 2: The next step is to determine whether the sample is homogeneous or heterogeneous relative to the measurement structure. This is accomplished by fitting the model from Step 1 to the data in both 1- and 2-class LVMMs and comparing the fit of the models. If the fit of the 1-class model is superior, there is no evidence of sample heterogeneity with respect to the measurement structure, and the measurement invariance analyses can be stopped. If the 2-class model produces better fit, the next step is to identify the items that contribute to this heterogeneity (i.e., the items that are least invariant).
Step 3: DIF methods are applied to identify those items that lack measurement invariance across the latent classes. These items are then removed from the test or item set (unless there are other reasons for retaining them).
Step 4: The reduced test or item set is once again fit to the data in both 1- and 2-class LVMMs, and the fit of the models is compared. If the 1-class model produces better fit, the analyses come to an end. If the 2-class model produces better fit, an iterative process begins.
Step 5: Steps 3 and 4 are repeated until the most invariant items are identified and the 1-class model produces superior fit compared with the 2-class model (i.e., the sample is no longer heterogeneous with respect to the measurement model).
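The five steps can be expressed as an iterative loop. In the hypothetical Python sketch below, `fit_lvmm` and `dif_delta_r2` are stand-ins for the mixture-model estimation and DIF analysis (in practice carried out in software such as Mplus and SPSS); they are assumptions for illustration, not real library functions:

```python
def select_invariant_items(items, fit_lvmm, dif_delta_r2, cutoff=0.035):
    """Repeat Steps 3-4 until the 1-class model fits best.

    fit_lvmm(items, n_classes) -> fit criterion (e.g., BIC; lower is better)
    dif_delta_r2(item, items)  -> DIF effect size for one item
    """
    items = list(items)
    while True:
        fit_1class = fit_lvmm(items, n_classes=1)  # Steps 2/4: compare fit
        fit_2class = fit_lvmm(items, n_classes=2)
        if fit_1class <= fit_2class:
            return items          # sample homogeneous: stop (Step 5 met)
        flagged = [i for i in items if dif_delta_r2(i, items) > cutoff]
        if not flagged:           # 2-class fit better but no item exceeds cutoff
            return items
        items = [i for i in items if i not in flagged]  # Step 3: remove DIF items
```

A toy run with fabricated fit functions (two of five items carrying DIF) terminates with the three invariant items retained.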
Fig. 1 Flowchart of the five-step approach. Step 2 compares the fit of a 1-class model (assuming homogeneity) and a 2-class model (allowing heterogeneity); if there is no evidence of DIF due to population heterogeneity, the analysis stops. Step 3 uses DIF analysis methods to identify the least invariant items. Step 4 repeats the 1- versus 2-class comparison with those items excluded. Step 5 repeats Steps 3-4 until there is no more evidence of heterogeneity with respect to the remaining items (i.e., all items are invariant).
It is important to note that the above steps focus on the identification of items for which the measurement model parameters are most likely to be invariant. In the context of test construction, this information supplements other psychometric and substantive considerations to guide item selection.

Demonstration of LVMMs in test construction
The following expository analysis is provided as an example of how LVMMs can be used to identify items that are invariant in the population. The five-step approach was applied to an existing item bank (39 items) measuring daily activities (see Table 1), which is one of the item banks of the CAT-5D-QOL [45,46]. The items address overall ability to perform usual activities, difficulty or limitations in specific aspects of daily living (e.g., grooming, working, and socializing) and the need for assistance in daily living. Five-point response scales were used for 37 items, while a 4-point and a 3-point response scale were used for one item each. The data are from a sample of 1666 adults living in the province of British Columbia, Canada. Approximately 20% were patients at a rheumatology clinic, 20% were drawn from a waiting list for knee or hip replacement surgery, and the remainder comprised a random stratified community sample. Further information about this sample is published elsewhere [8].

Statistical methods
The statistical methods of relevance to this expository analysis include those pertaining to factor analysis, IRT, LVMMs (using the MPLUS v7.4 software [47]), and DIF analysis (using SPSS v24 [48]).
For Step 1, confirmatory and exploratory factor analyses were conducted using mean- and variance-adjusted weighted least squares estimation (WLSMV) to determine whether the items could be treated as unidimensional. Dimensionality was assessed by evaluating the ratio of the first and second eigenvalues. Although the eigenvalue-greater-than-1 rule of thumb is widely used, it tends to result in overestimation of the number of latent factors [49,50]. Based on a simulation study of conventional guidelines, Slocum-Gori and Zumbo recommend that a ratio of the first and second eigenvalues greater than 3 is indicative of a unidimensional structure when samples are relatively large (800 or more) and communality relatively high (the simulation was based on a communality of 0.90) [51]. Fit of the measurement model was assessed using the Comparative Fit Index (CFI) and the Root Mean Square Error of Approximation (RMSEA). Values above 0.90 for the CFI and below 0.08 for the RMSEA indicate acceptable fit [52]. Next, a 2-parameter graded response IRT model using full information maximum likelihood was applied [53].
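The eigenvalue-ratio criterion can be illustrated with a toy correlation matrix. This is a sketch only (the actual analysis used WLSMV estimation of polychoric correlations in Mplus); the 4-item matrix below is fabricated to have one dominant factor:

```python
import numpy as np

def eigenvalue_ratio(corr):
    """Ratio of the first to the second eigenvalue of a correlation matrix,
    used as a rough indicator of unidimensionality (ratio > 3)."""
    eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending order
    return eigvals[0] / eigvals[1]

# toy 4-item correlation matrix with equal inter-item correlations of 0.7
corr = np.full((4, 4), 0.7)
np.fill_diagonal(corr, 1.0)
print(eigenvalue_ratio(corr))  # well above the ratio-of-3 guideline
```

For this compound-symmetric matrix the eigenvalues are 1 + 3(0.7) = 3.1 and 1 − 0.7 = 0.3 (the latter with multiplicity three), so the ratio is about 10.3.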
For Step 2, LVMMs of the graded response IRT model from Step 1 were applied specifying 1 and 2 latent classes, following model specifications described by Sawatzky et al. [7]. Relative fit of the 1- and 2-class LVMMs was assessed based on the Bayesian Information Criterion (BIC). Lower BIC values indicate better fit [54]. In addition, the percentage of statistically significant bivariate residuals (based on a χ² test of each item pair adjusted for multiple comparisons) was considered, as was the entropy for the 2-class model. Statistically significant bivariate residuals indicate violations of the assumption of local (item) independence [12,13], while entropy measures certainty in class membership (values above 0.8 are considered indicative of high confidence in assignment) [55]. The assumed standard normal distributions of the latent factors were examined by describing the distributions of the predicted latent factor scores. Multinomial logistic regression based on pseudo-class draws [56,57] was used to determine the extent to which latent classes differed with respect to sex, age, having a medical condition (yes/no), using two or more medications (yes/no), hospitalization during the previous year (yes/no), and self-reported health status (ranging from 1 = excellent to 5 = very poor).
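The BIC comparison rests on the standard formula BIC = −2 log L + k ln(n), where k is the number of free parameters and n the sample size. A minimal sketch, using entirely hypothetical log-likelihoods and parameter counts (the real values come from the Mplus output):

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion; lower values indicate better fit."""
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

# hypothetical fits of 1- and 2-class models to n = 1666 respondents;
# the 2-class model roughly doubles the item parameters and adds a class weight
bic_1class = bic(log_likelihood=-40000.0, n_params=195, n_obs=1666)
bic_2class = bic(log_likelihood=-39000.0, n_params=392, n_obs=1666)
print(bic_2class < bic_1class)  # True here: the likelihood gain outweighs the penalty
```

The penalty term k ln(n) is what allows the more parsimonious 1-class model to win once the noninvariant items are removed, as in Step 5 of the Results.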
For Step 3, any of the aforementioned DIF methods could be used to examine measurement invariance of item parameters across the latent classes. For this expository analysis, the ordinal logistic regression (OLR) approach was used [22,58]. This was accomplished by comparing two nested models where each item was regressed on (i) the latent factor score (based on the LVMM) and (ii) the factor score plus the latent class membership (to test for uniform DIF) and the latent class by latent factor interaction (to test for nonuniform DIF). The magnitude of DIF was evaluated based on the difference in the Nagelkerke R² (i.e., ΔR²), comparing models (i) to (ii), for each item. A ΔR² below .035 is indicative of "negligible" DIF, a ΔR² between .035 and .070 indicates "moderate" DIF, and a ΔR² above .070 indicates "large" DIF [59]. Based on these criteria, the least invariant items were identified as those that had a ΔR² greater than .035.
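Given the log-likelihoods of the null model and the two nested OLR models, the ΔR² effect size follows from the standard Cox-Snell/Nagelkerke formulas. The sketch below uses hypothetical log-likelihood values for one item; in the actual analysis these quantities came from SPSS:

```python
import math

def nagelkerke_r2(ll_null, ll_model, n):
    """Nagelkerke pseudo-R²: Cox-Snell R² rescaled to a [0, 1] maximum."""
    cox_snell = 1.0 - math.exp(2.0 * (ll_null - ll_model) / n)
    max_r2 = 1.0 - math.exp(2.0 * ll_null / n)
    return cox_snell / max_r2

def dif_effect_size(ll_null, ll_factor, ll_factor_class, n):
    """ΔR² comparing model (i) (factor score only) with model (ii)
    (factor score + class membership + class-by-factor interaction)."""
    return (nagelkerke_r2(ll_null, ll_factor_class, n)
            - nagelkerke_r2(ll_null, ll_factor, n))

# hypothetical log-likelihoods for one item, n = 1666 respondents
delta = dif_effect_size(ll_null=-2500.0, ll_factor=-1900.0,
                        ll_factor_class=-1800.0, n=1666)
print(delta > 0.035)  # True: this hypothetical item would be flagged
```

Items whose ΔR² exceeds the .035 cut-off would be flagged as least invariant and removed in Step 3.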
For Step 4, the 2-parameter graded response IRT model from Step 1, minus the least invariant items from Step 3, was refit to the data in both 1- and 2-class LVMMs. Model fit was assessed as in Step 2. In Step 5, Steps 3 and 4 were repeated several times, each time removing the items that exceeded the ΔR² cut-off.

Results
Information about the fit of the LVMMs is reported in Table 2. The following is a summary of the results pertaining to each step of the LVMM approach.
Step 1: The EFA results produced a ratio of the first and second eigenvalues of 16.6, with the first four eigenvalues being 31.09, 1.87, 0.99, and 0.65, thereby providing support for unidimensionality. The single-factor structure resulted in acceptable overall model fit. Given this compelling evidence of a unidimensional structure, we proceeded with examining heterogeneity in the population as an alternative explanation for the remaining local dependence.
Step 2: The 2-class LVMM provided a better fit to the data compared with a 1-class graded response IRT model. The BIC for the 2-class LVMM was lower, and there was a notable reduction in the percentage of statistically significant bivariate residuals (see Table 2). The entropy for the 2-class LVMM was 0.84. The predicted latent factor scores of both models approximated the normal distribution (see Table 2). People in class 1 were more likely to be older, female, and have more health challenges (see Table 3). Because these results are suggestive of heterogeneity in the sample with respect to the measurement model, the next step was the identification of DIF items.
Step 3: OLR revealed that of the 39 items, 23 items had ΔR² values exceeding the recommended cut-off (see Table 1). These were removed from the model, and the resulting 16-item model was retested in Step 4.
Step 4: A comparison of the 1-and 2-class LVMMs of the 16 items indicated that the 2-class model once again had better fit (see Table 2). The two classes differed with respect to several demographic-and health-related variables (see Table 3).
Step 5: OLR (the next iteration of Step 3) was subsequently reapplied to the remaining 16 items based on the LVMM results from Step 4. Five items had ΔR² values above the recommended cut-off and were removed. The BIC of the 1-class LVMM of the remaining nine items was lower than that of the 2-class LVMM (the next iteration of Step 4).
In addition, the 1-class LVMM of the remaining nine items resulted in substantially improved fit relative to the 1-class LVMMs of 16 and 39 items. These results suggest that the sample is relatively more homogeneous with respect to the unidimensional measurement structure of the nine items. Therefore, no further DIF analyses were conducted.
A factor analysis of the final selection of nine items provided compelling support for a unidimensional measurement structure (the two largest eigenvalues were 7.5 and 0.4) and similar overall model fit (RMSEA = 0.087; CFI = 0.99), and substantially improved local independence, with only one residual correlation above 0.1 (r = 0.11). The parameter estimates of the corresponding unidimensional graded response model are reported in Table 4. Finally, the predicted factor scores are strongly correlated (r = 0.83) with the factor scores based on a graded response model of the original 39 items.

Discussion
Factor analysis methods are widely used to guide item selection in test construction. The DLD framework provides a theoretical basis for examining measurement invariance as an additionally important consideration. However, the characteristics of individuals that may affect measurement invariance are often not known a priori. For example, DIF analyses could have been conducted based on the subsamples in the data used for our expository analysis (rheumatology patients, hip and knee patients, and a community sample). While this approach might also lead to the detection of DIF items, and would be appropriate if the goal were to establish lack of DIF relative to these groups specifically, such a manifest-groups approach would fail to detect DIF with respect to the more complex set of characteristics that describe the latent classes found in our data (Table 3). Although others have advocated for consideration of measurement invariance in test construction, this is the first study to describe and demonstrate how LVMMs can be used to identify invariant items to inform item selection in the development of PROMs. In our expository analysis, we used LVMMs to identify a subset of items that were most invariant within the sample. We specifically demonstrate how LVMMs can complement IRT analysis to examine and address the assumption of local independence underlying latent variable measurement theory.
Footnote: The graded response model was estimated in the Mplus [47] software, where the cumulative probability P_ij(Y ≥ j | θ) of an item i response at or above category j is expressed as follows: P_ij(Y ≥ j | θ) = exp(−τ_ij + λ_i θ) / (1 + exp(−τ_ij + λ_i θ)), where τ_ij denotes the thresholds between the categories of item i, and λ_i denotes the factor loading of item i. The following transformation can be applied to convert the Mplus thresholds (τ) and factor loadings (λ) into the difficulty (b) and discrimination (a) parameters of the graded response model: b_ij = τ_ij / λ_i, and a_i = λ_i.
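The threshold/loading-to-IRT transformation described in the footnote (b_ij = τ_ij / λ_i, a_i = λ_i) is straightforward to apply to estimated parameters. A small sketch with hypothetical Mplus-style values:

```python
import numpy as np

def mplus_to_grm(thresholds, loading):
    """Convert one item's Mplus thresholds (tau) and factor loading (lambda)
    into graded-response-model difficulty (b) and discrimination (a)
    parameters, per the transformation b_ij = tau_ij / lambda_i, a_i = lambda_i."""
    b = np.asarray(thresholds, dtype=float) / loading
    a = float(loading)
    return b, a

# hypothetical 5-category item: four thresholds and one loading
b, a = mplus_to_grm(thresholds=[-1.2, 0.3, 1.8, 2.6], loading=2.0)
print(b)  # [-0.6   0.15  0.9   1.3 ]
print(a)  # 2.0
```

Note that the difficulty parameters inherit the ordering of the thresholds, while the discrimination parameter is simply the loading itself under this parameterization.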
As aptly described in the DLD framework, local independence requires exchangeability of items (dimensionality) as well as of sampling units (invariance) [10,11]. However, despite the apparent utility of LVMMs to inform item selection based on the exchangeability of sampling units, these models do not always provide conclusive results. Accordingly, it is widely acknowledged that item selection should not be exclusively driven by these statistical considerations. Both item content and theoretical considerations need to be taken into account [2,16]. For example, in our analysis, most of the retained items address difficulty related to basic activities of daily living at more severe levels of disability (e.g., dressing, bathing, toileting), whereas items pertaining to social activities and leisure activities were not retained. Consequently, content validity, and therefore construct validity, may have been affected by the removal of items. Further validation research is needed to determine the extent to which the remaining items fully reflect the intended construct of interest. The estimated correlation of the factor scores based on the original 39 items and the remaining 9 items is quite large (i.e., 0.83), providing support for concurrent validity. However, the correlation is not perfect. Depending on the purpose of measurement and the conceptualization of daily activities, different decisions about the retention of items, or the option of revising items to be more invariant, may be made.
There are several important areas for further methodological development regarding the use of LVMMs for the identification of the least invariant items. First, simulation studies are recommended to determine the optimal sequential process for removing items that lack measurement invariance. In the example analysis, all items that met a particular criterion for invariance were removed before refitting the LVMM. The rationale is to remove those items that lack invariance with respect to particular latent classes, prior to estimating new latent class parameters. Another option is to remove one item at a time, such that the latent class parameters are reestimated every time an item is removed. Second, as is common in factor analysis, IRT, and Rasch analysis, the LVMMs in our analysis assume normally distributed latent factors (although this is not a necessary condition for latent variable modeling). LVMMs may detect artefactual latent classes when this assumption is not met [60]. In addition, although the widely used graded response model was used in our analysis, other IRT and Rasch models could be utilized. Simulation studies are needed to determine the extent to which mis-specification of latent factor distributions and different specifications of latent variable measurement structures may affect LVMM results in the context of test construction. Third, simulation studies are recommended for determining the potential implications of multidimensionality with respect to identification of DIF and the use of LVMMs, for "[a]lthough the presence of DIF automatically implies the presence of a secondary dimension, the presence of a secondary dimension does not automatically imply the presence of DIF" [61, p. 108].
While our expository analysis exemplifies the application of LVMMs to a unidimensional set of items, it is important to consider the challenges of distinguishing multidimensional constructs from DIF, especially when there is evidence of "nuisance dimensions," which could be manifestations of DIF [24,29,62]. Fourth, it is not known to what extent DIF analyses may be influenced by inconclusive class membership (i.e., entropy values less than 1). In addition, other DIF detection methods and effect size criteria for identifying invariant items could be utilized [6]. The OLR DIF detection approach utilized in the expository analysis was chosen because it is relatively straightforward to conduct and has a strong track record in psychometric analyses of PROMs. Although extensive research comparing different DIF detection methods has been conducted [e.g., 6], previous studies have not focused on the application of these methods in relation to LVMMs. Simulation studies and primary research can be used to develop specific recommendations for implementing LVMMs across a range of data-analytic conditions.

Conclusion
We propose a theoretical foundation and general approach for using LVMMs in test construction with the intent to stimulate further methodological development for heterogeneous populations. An important goal in the measurement of PROMs is to ensure that the perspectives of patients are represented in an unbiased manner. The DLD framework and use of LVMMs have significant potential for advancing theoretical developments and research on item selection for test construction of PROMs in heterogeneous populations.

Compliance with ethical standards
Conflicts of interest The authors declare that they have no conflict of interest.
Ethical approval All procedures performed in this study were in accordance with the 1964 Declaration of Helsinki and its later amendments. All participants were provided a consent form together with the survey questionnaire and were informed that their consent was implied if they completed the questionnaires. The study was approved by the University of British Columbia Behavioural Research Ethics Board (approval # B00-0500).
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.