More Military Neuro Work: Examining the Memory Complaints Inventory (MCI)

This was another awesome project to do with the fantastic Dr. Pat Armistead-Jehle. One of the major shortcomings of Green’s MCI is the scarcity of work relating it to other SVTs, with only limited cross-validation research to date (Armistead-Jehle & Shura, 2022). This study was intended to expand that limited research, using the recently validated CBS scale from the PAI as the criterion (relatedly, see Shura et al., 2022, which I recently helped publish). You can download a PDF copy of the article HERE.

The short version of the findings is that the MCI shows good evidence of classification accuracy using the recommended cut scores. We used only one SVT criterion, however, and similar scales from other broadband measures (e.g., the MMPI-3’s RBS scale) are needed to further this line of work. Below are the classification and effect-size metrics for each of the MCI scales.

New Article: The Personality Assessment Inventory (PAI) CBS and CB-SOS cognitive over-reporting scales in Veterans

Recently, a collaboration with the wonderful folks at the Salisbury Veterans Affairs MIRECC was published in The Clinical Neuropsychologist. In this article we examined the CBS and CB-SOS scales for the PAI to determine how well they detect over-reported cognitive symptoms (Word Memory Test criterion) and general psychopathology over-reporting (M-FAST criterion) in a sample of post-deployment Veterans. This is an exciting article that expands validity-detection options and effectiveness on the PAI, and it also highlights some very important considerations for traditional conceptualizations of distinct over-reporting strategies (e.g., cognitive, somatic, and psychological distress patterns; Sweet et al., 2021).

Specifically, results highlight that scales designed to assess cognitive domains may not be distinct from other domains, in part because of the non-distinct item content used to generate the scales, even when using “bootstrapped floor effect” approaches to scale generation (see Burchett & Bagby, click me). Such findings are supported by the overall classification rates for the M-FAST versus the WMT when contrasted at the 90% specificity point (Table 5), as well as by the higher sensitivity values. While it is possible that this pattern is sample dependent, it is also evident on other instruments and cognitive over-reporting scales (e.g., Butcher et al., 2008). Thus, these findings highlight a specific instrument-development need and have direct implications for how instrument-embedded validity scales should be conceptualized moving forward.

From the discussion in this recently published Shura et al (2022) paper, “Revisions to testing measures that aim to expand cognitive overreporting assessment, and to focus on this domain of symptom response (Sherman et al., 2020), may benefit from increased emphasis on the development of cognitively focused items based on a priori, empirically based content. Explicit use of validity detection patterns (Rogers & Shuman, 2005) at early developmental phases (e.g. creating specific items that highlight symptom incongruence or symptom rarity; Rogers & Bender, 2018), rather than post-hoc identification of items that may not measure those constructs explicitly is warranted. Well-specified item-pool revision efforts specific to validity testing needs and standards (Sherman et al., 2020; Martin et al., 2015) may not only improve general and longstanding classification difficulties (i.e. low sensitivity), but the distinctiveness of symptom clusters. Even if the overlapping PVT/SVT performance does not resolve entirely as a function of shifted developmental priorities, placing an increased emphasis on validity content development at the test revision stage remains necessary. Broadband measures are widely and historically preferred because of their SVTs (Ben-Porath & Waller, 1992; Russo, 2018), as well as the broader growth in focus on SVT-related research (Sweet et al., 2021). Research on validity scales tends to use either PVT or SVT criterion as an outcome, but rarely within the same study. Given the potential for effective performance on the related (but not overlapping) constructs of PVTs and SVTs, the inclusion of distinct criterion measures that assess divergent over-reporting symptom sets (somatic, cognitive, or psychological; Sweet et al., 2021) is also merited.”

CLICK TO DOWNLOAD THE ARTICLE

Personality Assessment Inventory (PAI) Cognitive Over-reporting Validity Detection

The lab is working extensively to expand research on detecting over-reporting of cognitive symptoms on the PAI. The most recent paper is part of a new collaboration with the staff at the Hefner VA MIRECC in Salisbury, North Carolina, including the fantastic Dr. Robert Shura (as well as other training staff there). It is the latest in the continued collaboration with Dr. Pat Armistead-Jehle. We used a sample of Veterans to explore how well the CBS and CB-SOS scales detect invalid responding, based on performance on a PVT (Word Memory Test) and an SVT (M-FAST). This paper is in press at The Clinical Neuropsychologist with the following citation:

Shura, R., Ingram, P.B., Miskey, H.M., Martindale, S.L., Rowland, J.A., & Armistead-Jehle, P. (in press). Validation of the Personality Assessment Inventory (PAI) Cognitive Bias Scale (CBS) and Cognitive Bias Scale of Scales (CB-SOS) in a Post-Deployment Veteran Sample. The Clinical Neuropsychologist.

Following exclusions for non-content responding, the sample comprised 371 Veterans assessed in a neuropsychology clinic. Pass/fail group differences were significant, with moderate effect sizes for all cognitive bias scales between the WMT-classified groups (d = .52 – .55) and large effect sizes between the M-FAST-classified groups (d = 1.27 – 1.45). AUC effect sizes were moderate across the WMT-classified groups (.650 – .676) and large across the M-FAST-classified groups (.816 – .854). When specificity was set to .90, sensitivity was higher for the M-FAST criterion, and the CBS performed best (sensitivity = .42). Thus, the CBS and CB-SOS scales appear to detect symptom invalidity better than performance invalidity in Veterans, using cutoff scores similar to those found in prior studies with non-Veterans.
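To make the “sensitivity at a fixed specificity” logic concrete, here is a minimal Python sketch using entirely simulated scores (not the study’s data or analysis code); it reads sensitivity off the ROC curve at the point where specificity is at least .90.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical data: 1 = criterion failure (e.g., SVT flagged), 0 = pass.
# Scores stand in for a validity scale such as the CBS (higher = more invalid).
y = np.concatenate([np.zeros(300, dtype=int), np.ones(71, dtype=int)])
scores = np.concatenate([rng.normal(50, 10, 300),   # criterion-pass group
                         rng.normal(63, 10, 71)])   # criterion-fail group

fpr, tpr, thresholds = roc_curve(y, scores)
print(f"AUC = {roc_auc_score(y, scores):.3f}")

# Sensitivity at a fixed specificity of .90: specificity = 1 - FPR,
# so take the largest TPR among points where FPR <= .10.
mask = fpr <= 0.10
idx = np.argmax(tpr[mask])
print(f"At specificity >= .90: sensitivity = {tpr[mask][idx]:.2f}, "
      f"cut score = {thresholds[mask][idx]:.1f}")
```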

Use of the Memory Complaints Inventory: Relationship to the PAI Cognitive Bias Scale (CBS) SVT Measure as a Criterion

I’m thrilled to have another paper out with the fantastic Dr. Pat Armistead-Jehle focusing on validity assessment and scale effectiveness. It is still in press, so no formal PDF yet, but it will appear in Archives of Clinical Neuropsychology. Because validity must be established before substantive scales are interpreted, validating the validity measures themselves remains critical to effective clinical assessment. There remains a paucity of work on memory-complaint SVTs relative to other domains of over-reporting (e.g., psychopathology; see Sweet et al., 2021), and Paul Green’s Memory Complaints Inventory (MCI) is the leading measure that assesses this domain independent of broadband personality assessment. However, no work had related performance on this instrument to the PAI, despite their frequent concurrent use in neuropsychological evaluations. I provide cut-score effectiveness for the MCI below, using the CBS scale (the PAI’s cognitive SVT) as the criterion.

Assessing the Screening of Depression: What does the PHQ measure and how should clinicians conceptualize findings?

Cole recently published yet another amazing paper, expanding the ability to use contemporary personality assessment measures and to link them with other clinical practices. In this case, they examined how common depression screens (i.e., the PHQ-2/9) used in medical settings relate to the more comprehensive MMPI-3, building directly on an article by David McCord using the MMPI-2-RF. Linked below are the tables directly related to screening effectiveness. Psychologists, as well as other health/mental health professionals, should view elevated scores on the PHQ (i.e., those exceeding clinical cut-score recommendations: 9 on the PHQ-9, 3 on the PHQ-2) as most associated with general internalizing pathology and with feelings of self-doubt, helplessness, and demoralization. Likewise, risk of suicidality is significantly elevated at these cut scores and should be evaluated.

NEW PAPER: Factors influencing student intention to conduct psych assessments in their careers

More excellent work from TTU Clinical Psych student Becca Bergquist was just accepted to the Journal of Clinical Psychology! Long story short, we were curious whether our national sample of clinical/counseling students could give us a better sense of the factors influencing whether they plan to incorporate assessment into their careers (a major component of psychologists’ professional identity).

Developing long-term professional practice goals is a critical step not only for trainees, but also for designing effective educational approaches that guide competent psychological assessment practice. Thus, understanding the factors that shape decisions to engage in this domain of competence is needed, and such work must evaluate self-reported and actual competency as distinct constructs.

Survey invitations were sent to training directors (TDs) at APA-accredited HSP programs that include substantive training in Clinical or Counseling psychology (including those listed as combined-type programs). Programs were considered for inclusion if they were located within the United States and listed as accredited on the APA website in January 2019 (APA, 2018). Our final sample (n = 414; see Table 1) of trainee respondents (PhD = 64%; PsyD = 35.3%) was on average 27.8 years old (SD = 3.5) and predominantly identified as female (79.5%) and White (82.4%). Most trainees were enrolled in a Clinical training program (77.8%) rather than a Counseling (17.6%) or combined-type (4.6%) program. This recruitment window means we are talking about PRE-COVID understandings of assessment, with no tele-assessment emphasis.

The findings from this study have four distinct and important themes that warrant additional consideration: (a) students’ intention to utilize assessments in their future careers is incrementally predicted by self-reported competence beyond program characteristics, respondent demographics, and career-setting aspirations (see the sketch below); (b) self-reported competency plays a larger role than performance-based competency when assessing trainees’ intentions to involve assessment in their careers; (c) graduate training and practice experiences in assessment were nonsignificant predictors of trainees’ intentions after accounting for the other predictors in the model; and (d) self-reported and performance-based competence influence trainees’ perception of and engagement in training experiences.
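Finding (a) reflects a standard hierarchical (incremental) regression design. Below is a minimal sketch of that logic with simulated data; all variable names (intent, self_comp, etc.) are hypothetical stand-ins, not the study’s actual measures.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 414

# Hypothetical stand-ins for the predictor blocks described above.
df = pd.DataFrame({
    "program_type": rng.integers(0, 2, n),     # program characteristics block
    "age": rng.normal(28, 3.5, n),             # demographics block
    "career_setting": rng.integers(0, 3, n),   # career-aspiration block
    "self_comp": rng.normal(0, 1, n),          # self-reported competence
})
# Simulate the outcome with a real contribution from self-reported competence.
df["intent"] = 0.5 * df["self_comp"] + 0.2 * df["program_type"] + rng.normal(0, 1, n)

# Step 1: covariate blocks only.
m1 = smf.ols("intent ~ program_type + age + C(career_setting)", data=df).fit()
# Step 2: add self-reported competence and examine the change in R-squared.
m2 = smf.ols("intent ~ program_type + age + C(career_setting) + self_comp",
             data=df).fit()

print(f"R2 step 1 = {m1.rsquared:.3f}, step 2 = {m2.rsquared:.3f}, "
      f"incremental R2 = {m2.rsquared - m1.rsquared:.3f}")
```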

Findings suggest that a focus on self-awareness and self-knowledge in competency development (Kaslow et al., 2018) would benefit from ensuring trainee perceptions of their competency align with benchmarked progression. Trainees with high assessment competence (both self-reported and performance-based) reported significantly more hands-on instrument use than peers with lower assessment competence. This pattern suggests that assessment competence may be shaped by coursework and practicum training; thus, efforts to increase exposure to and training with assessments may result in greater competence and career engagement. While these training components seem to be promising targets for increasing trainee assessment capability, implementing and evaluating such efforts cannot be done without modifying existing training frameworks. Indeed, research generally suggests that traditionally defined assessment competence, which stems from knowledge and skills obtained during graduate coursework and clinical practicum, does not contribute meaningfully to perceptions of professional competence in practicing psychologists (Neimeyer et al., 2012b).

2022 AACN: The Personality Assessment Inventory (PAI) and TBI in active duty service members

This is another great project on personality assessment in active-duty military personnel who underwent a neuropsych evaluation with Pat Armistead-Jehle. The work of Tristan (pictured above; lab post-bac) and Sarah McCracken (check the lab members webpage) was fantastic. I’ll summarize our work and findings.

Previous research on the PAI has examined response patterns in those with recent brain injuries. While some studies (Kennedy et al., 2015; Velikonja et al., 2010) have identified a four-group solution (Low/Non-symptomatic, High, Moderate, and Somatic Distress), others (Demakis et al., 2007) have suggested a three-group solution (high distress, manic/high energy, and externalizing groups). Moreover, prior research has not employed the full span of PAI scales. The current study sought to address some of these gaps by using latent profile analysis (LPA) to examine response patterns on the PAI in active-duty military personnel with a remote history of brain injury (n = 384).

LPAs were conducted using the Clinical, Treatment, and Interpersonal scales because of their non-overlapping item content. The best-fitting model (selected on AIC, BIC, SSA-BIC, entropy, parsimony, and theory-based interpretation) was contrasted across PAI subscales, as well as neuropsychological testing, PVT performance, and demographic data. Results suggest a 2-class solution (non-symptomatic [59%] and high-symptomatic [40%]), with no differentiation between subsets of symptoms (e.g., manic, externalizing, or somatic, per earlier studies). Scale differences reflected generally large effects, particularly for SOM, ANX, ARD, DEP, PAR, SCZ, and BOR (d = 1.07 – 2.50). The high-symptom group also evidenced poorer neurocognitive testing and more frequent PVT failure. These findings suggest that previously identified response groups are not evident in active-duty military personnel with a remote history of brain injury. Some tables are below to show off results from the poster (click to download).
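For readers unfamiliar with the model-comparison step, the sketch below illustrates the logic of contrasting solutions with different class counts on AIC/BIC. LPA is typically fit in dedicated software (e.g., Mplus); here a scikit-learn Gaussian mixture with diagonal covariances serves as a rough stand-in, with simulated data in place of actual PAI scores.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Hypothetical T-scores on a few PAI scales for two latent groups,
# mimicking a non-symptomatic and a high-symptomatic profile.
low = rng.normal(50, 8, size=(230, 5))
high = rng.normal(68, 9, size=(154, 5))
X = np.vstack([low, high])

# Compare 1- through 4-class solutions on AIC/BIC (lower = better),
# analogous to inspecting AIC/BIC/SSA-BIC across LPA models.
for k in range(1, 5):
    gm = GaussianMixture(n_components=k, covariance_type="diag",
                         n_init=10, random_state=0).fit(X)
    print(f"{k} classes: AIC = {gm.aic(X):.0f}, BIC = {gm.bic(X):.0f}")
```

In practice, entropy, class sizes, and theoretical interpretability are weighed alongside these indices rather than mechanically picking the minimum AIC/BIC.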

2022 MMPI Symposium

Texas Tech and the PATS lab really brought some awesome stuff this year! We had a number of projects presented by current grad and undergrad members, as well as a big group attending. I couldn’t be more thrilled with everyone’s work. I’m going to touch on the projects we presented and add some pictures of us having a great time all around. Citations link to presentations for download.

Menton, W. H., Whitman, M. R., Tylicki, J., Morris, N.M., Tarescavage, A. M., Ingram, P.B., Gervais, R. O., Wygant, D. B., & Ben-Porath, Y.S. (2022, June). Predictive Utility of Experimental Higher-Order Validity Scales for the MMPI-3. Comprehensive presentation presented at the 2022 Annual MMPI Research Symposium, Minneapolis, MN.

An item-level approach to general over-/under-reporting appears to perform better than the content-focused, theory-based (psychological, somatic, cognitive) MMPI-3 over-reporting scales and its under-reporting scales.

Keen, M., Morris, N.M., & Ingram, P.B. (2022, June). Development and Initial Validation of the Scale of Scales (SOS) Over-Reporting Scales for the MMPI-3. Data blitz presented at the 2022 Annual MMPI Research Symposium, Minneapolis, MN.

We took a general approach to over-reporting, relying on the assumption that feigning attempts are less specific and more general than some models of symptom over-reporting suggest (see Rogers & Bender). Scale-level means (see Gaines et al., 2013) perform as well as or better than the item-level over-reporting scales of the MMPI-3. We have subsequently replicated this approach in a few clinical samples and found equal-to-better performance, including against several PVTs (a reanalysis of the Ingram et al., 2020 sample of active-duty personnel in neuropsychological evaluation).
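To illustrate the scale-level-mean idea behind a “scale of scales” (as opposed to summing individual items), here is a minimal sketch; the scale names and data are hypothetical, and the actual SOS composition is defined in the presentation, not here.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Hypothetical T-scores on a handful of substantive scales; the real
# SOS scale membership comes from the paper, not this illustration.
scales = ["SCALE_A", "SCALE_B", "SCALE_C", "SCALE_D"]
df = pd.DataFrame(rng.normal(50, 10, size=(200, 4)), columns=scales)

# Scale-level-mean composite: standardize each scale within the sample,
# then average across scales so each contributes equally.
z = (df - df.mean()) / df.std(ddof=0)
df["SOS_composite"] = z.mean(axis=1)

print(df["SOS_composite"].describe().round(2))
```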

Cole, B.P., & Ingram, P.B. (2022, June). Treatment prediction in an outpatient telehealth clinic: Relationships between MMPI-3 scores and positive-psychology intervention engagement and outcome. Data blitz presented at the 2022 Annual MMPI Research Symposium, Minneapolis, MN.

Pilot data demonstrated that the MMPI-3 can predict positive-psychology intervention outcomes and that the symptom scales are negatively associated with strength traits. This may support low-score interpretations tied to strengths, although that approach ignores the orthogonal nature of mental health and illness (see Keyes, 2002).

Peters, A.S., Morris, N.M. & Ingram, P.B. (2022, June). Classification Accuracy of the MMPI-3’s Eating Concerns (EAT) Scale using a Common Screening Measure as the Criterion. Poster presented at the 2022 Annual MMPI Research Symposium, Minneapolis, MN.

Ashlinn really rocked! For her first poster, she had to learn all the ROC/classification statistics. Just awesome. Findings suggest that the restricted item content makes prediction of any specific pattern of disordered eating difficult at the T-75 cut score (which requires only 2 endorsements). Given this, moving EAT to a critical scale, with endorsed items listed at the end of the report, may be wise. Next steps, you ask? Ashlinn will spend next year screening individuals into diagnostic groups to grow her samples and examining how well EAT differentiates those diagnostically screened groups.
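For a fixed cut score like T >= 75, the classification statistics reduce to a simple 2x2 table. Here is a minimal sketch with simulated (entirely hypothetical) data, not the poster’s actual numbers.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical EAT T-scores and screener-based caseness (1 = positive screen).
t_scores = np.concatenate([rng.normal(52, 9, 250), rng.normal(70, 12, 50)])
caseness = np.concatenate([np.zeros(250, dtype=int), np.ones(50, dtype=int)])

cut = 75  # T-score cut under evaluation
pred = (t_scores >= cut).astype(int)

# Build the 2x2 confusion table by hand for transparency.
tp = np.sum((pred == 1) & (caseness == 1))
fp = np.sum((pred == 1) & (caseness == 0))
tn = np.sum((pred == 0) & (caseness == 0))
fn = np.sum((pred == 0) & (caseness == 1))

print(f"sensitivity = {tp / (tp + fn):.2f}")
print(f"specificity = {tn / (tn + fp):.2f}")
print(f"PPV = {tp / (tp + fp):.2f}, NPV = {tn / (tn + fn):.2f}")
```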

Morris, N.M., Patterson, T.P., Ingram, P.B., & Cole, B.P. (2022, June). MMPI-3 and Gender: The moderating role of masculine identity on item endorsement. Blitz talk presented at the 2022 Annual MMPI Research Symposium, Minneapolis, MN.

Nicole’s focus on expanding contextual interpretation of the MMPI-3 exemplifies the diversity focus needed in assessment. Gender norms predicted an additional 4–13% of variance beyond symptoms and sex for most (11 of 16) MMPI-3 internalizing scales. Our next step is to use LCA to look at clusters of gender norms (masculine and feminine) across MMPI-3 scale presentations.
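The “moderating role” in the title corresponds to an interaction term in regression: gender-norm adherence changing the relationship between sex and scale scores. A minimal sketch with simulated data and hypothetical variable names (not the study’s measures):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 500

# Hypothetical data: a masculine-norms score moderating the link
# between sex and an internalizing-scale score.
df = pd.DataFrame({
    "sex": rng.integers(0, 2, n),        # 0 = female, 1 = male
    "masc_norms": rng.normal(0, 1, n),   # masculine gender-norm adherence
})
df["internalizing"] = (50 - 2 * df["sex"] - 1.5 * df["masc_norms"]
                       - 1.0 * df["sex"] * df["masc_norms"]
                       + rng.normal(0, 8, n))

# sex * masc_norms expands to both main effects plus the interaction;
# a significant interaction coefficient indicates moderation.
model = smf.ols("internalizing ~ sex * masc_norms", data=df).fit()
print(model.summary().tables[1])
```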

And now for some pictures of all the lab fun!

Besides the lab fun, I also got a chance to catch up with my old Western Carolina advisor David McCord and his students. The MMPI community is one of the best, and I love being part of it.

HiTOP and the PAI: 2022 APS Presentations

I’m excited to head to my first APS conference to present some recent work by Sarah Hirsch and Megan Keen on the Personality Assessment Inventory (PAI) and efforts to map HiTOP onto this instrument. We focused on military populations, using an EFA/CFA approach across distinct samples to examine how (and how well) these efforts apply to military service members.

We conducted a series of EFAs on a sample of active-duty soldiers seen as part of a neuropsychological evaluation, then performed Goldberg’s (2006) bass-ackwards analysis to link the observed factor structures across EFA solutions, up to the 7-factor model comprising the HiTOP subspectra. We were able to identify many of the spectra/subspectra factors, and the initial standardized loadings made good sense against the HiTOP model descriptions; however, diagnostic discrimination was not as clean as we might have hoped, and there were several unexpected medium-to-large correlations, suggesting some inflated general intercorrelation (reminiscent of the Scale 7/Scale 8 correlation on the MMPI-2 prior to the creation of the Restructured Clinical [RC] scales).
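For those curious about the bass-ackwards step, the core move is to correlate factor scores from adjacent solutions (k factors vs. k + 1) to trace how factors split as the solution expands. Here is a minimal sketch with simulated data using the factor_analyzer package; it is an illustration of the method, not our analysis code.

```python
import numpy as np
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

rng = np.random.default_rng(6)

# Hypothetical item/scale data standing in for PAI scores (n x p).
X = rng.normal(size=(400, 12))
X[:, :6] += rng.normal(size=(400, 1))  # induce a shared factor for realism

def factor_scores(data, k):
    fa = FactorAnalyzer(n_factors=k, rotation="varimax")
    fa.fit(data)
    return fa.transform(data)  # regression-estimated factor scores

# Bass-ackwards: correlate factor scores from adjacent solutions
# (1 factor vs. 2, 2 vs. 3, ...) to trace how factors split as k grows.
for k in range(1, 4):
    upper, lower = factor_scores(X, k), factor_scores(X, k + 1)
    r = np.corrcoef(upper.T, lower.T)[:k, k:]
    print(f"{k}- vs {k + 1}-factor solution correlations:\n{r.round(2)}\n")
```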

Next up, we took a group of Veterans being evaluated as part of their intake process on an outpatient PTSD Clinical Team (PCT). We started with the initial EFA model but failed to find good fit. Rather than using EFA-derived correlations as post-hoc corrections, we evaluated individual subspectra from the bottom up, trimming poorly fitting items and spectra. Our goal in taking these steps was to produce a replicable model with only strong, expected relationships, even if that meant a model not fully congruent with HiTOP. Avoiding dual-loading indicators also ensures a more interpretable model, since it maintains component independence.

You can download the posters by clicking here for the CFA or the EFA posters!

Interpretation of Positive Depression Screening on the PHQ

The PHQ-2/9 is one of the most widely used screening measures for depression, if not the most. It is implemented in a standardized manner in treatment-outcome research and patient-based care initiatives. However, interpreting what its scores represent within the broad internalizing spectrum of pathology is critical. Despite the items representing ‘depression’ criteria (A1–A9; DSM-5), these experiences are not unique to depressed individuals. Evaluating screening against the MMPI-3 offers a way to examine interpretive meaning using a new, highly validated broadband measure. Following up on a paper under review by Nicole Morris, presented at the 2021 MMPI Symposium, I was playing around with data visualization.

A positive screen on the PHQ-9 (cut score 10) was most associated with self-doubt, a trend that was not entirely consistent on the PHQ-2 (cut score 3): while Self-Doubt (SFD; navy line) maintained a major role, helplessness was the standout (HLP; green line). Different item content led to distinct internalizing symptoms driving a positive screen.

Figures: PHQ-9 and PHQ-2.
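For anyone wanting to recreate this style of figure, here is a minimal matplotlib sketch; the scale list and T-score values are invented for illustration and do not reproduce the actual results.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(7)

# Hypothetical mean MMPI-3 internalizing T-scores by PHQ-9 screen status.
scales = ["EID", "RCd", "RC2", "SFD", "HLP", "NFC"]
screen_neg = 50 + rng.normal(0, 1.5, len(scales))            # invented values
screen_pos = np.array([68, 70, 64, 72, 71, 66], dtype=float) # invented values

x = np.arange(len(scales))
plt.plot(x, screen_neg, marker="o", label="PHQ-9 screen negative")
plt.plot(x, screen_pos, marker="o", label="PHQ-9 screen positive (>= 10)")
plt.axhline(65, linestyle="--", color="gray", label="T = 65 clinical elevation")
plt.xticks(x, scales)
plt.ylabel("Mean T-score")
plt.legend()
plt.tight_layout()
plt.show()
```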