Assessing the Screening of Depression: What does the PHQ measure and how should clinicians conceptualize findings?

Cole recently published another excellent paper expanding how contemporary personality assessment measures can be linked with other clinical practices. In this case, they examined how common depression screens (i.e., the PHQ-2/9) used in medical settings relate to the more comprehensive MMPI-3. It is a very useful piece, building directly on an article by David McCord on the MMPI-2-RF. Linked below are the tables directly related to screening effectiveness. Psychologists, as well as other health/mental health professionals, should view elevated PHQ scores (i.e., those exceeding clinical cut score recommendations; PHQ-9 of 9, PHQ-2 of 3) as most associated with general internalizing pathology and with feelings of self-doubt, helplessness, and demoralization. Likewise, risk of suicidality is significantly elevated at these cut scores and should be evaluated.
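
For readers less familiar with the measure, here is a minimal sketch of how a PHQ-9 screen is scored and flagged. The cut score is left as a parameter since recommendations vary (this site mentions both 9 and 10 for the PHQ-9, and 3 for the PHQ-2); the function is purely illustrative and not drawn from Cole's paper.

```python
def score_phq9(responses, cut=10):
    """Score a PHQ-9 (nine items, each rated 0-3) and flag a positive screen.

    `cut` is a parameter because recommended cut scores vary by setting;
    this sketch defaults to the common cut of 10.
    """
    assert len(responses) == 9 and all(0 <= r <= 3 for r in responses)
    total = sum(responses)  # possible range: 0-27
    return {
        "total": total,
        "positive_screen": total >= cut,
        # Item 9 taps suicidal ideation; any endorsement warrants follow-up,
        # consistent with the note above that suicide risk should be evaluated.
        "suicide_item_flagged": responses[8] > 0,
    }

print(score_phq9([1, 2, 0, 1, 2, 1, 0, 1, 1]))  # total = 9
```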

NEW PAPER: Factors influencing student intention to conduct psych assessments in their careers

More excellent work from TTU Clinical Psych student Becca Bergquist was just accepted to the Journal of Clinical Psychology! Long story short, we were curious whether our national sample of clinical/counseling students could give us a better sense of the factors influencing whether they plan to incorporate assessment into their careers (a major component of psychologist professional identity).

Developing long-term professional practice goals is a critical step not only for trainees, but also for designing effective educational approaches that guide competent psychological assessment practice. Thus, understanding the factors that shape decisions to engage in this domain of competence is needed, and that understanding must treat self-reported and actual competency as distinct constructs.

Survey invitations were sent to training directors (TDs) at APA-accredited HSP programs that include substantive training in Clinical or Counseling psychology (including those listed as combined-type programs). Programs were considered for inclusion if they were located within the United States and listed as accredited on the APA website in January of 2019 (APA, 2018). Our final sample (n = 414; see Table 1) of trainee respondents (PhD = 64%; PsyD = 35.3%) was on average 27.8 years old (SD = 3.5) and predominantly identified as female (79.5%) and white (82.4%). Most trainees were enrolled in a Clinical training program (77.8%) rather than a Counseling (17.6%) or combined-type (4.6%) program. This recruitment window means we are talking about PRE-COVID understandings of assessment, with no tele-assessment emphasis.

The findings from this study have four distinct and important themes which warrant additional consideration: (a) students' intention to utilize assessments in their future careers is incrementally predicted by self-reported competence beyond program characteristics, respondent demographics, and career setting aspirations (sketched below); (b) self-reported competency plays a larger role than performance-based competency in predicting trainees' intentions to incorporate assessment into their careers; (c) graduate training and practice experiences in assessment were nonsignificant predictors of trainees' intentions after accounting for the other predictors in the model; and (d) self-reported and performance-based competence influence trainees' perception of and engagement in training experiences.
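
To make theme (a) concrete, here is a minimal sketch of the incremental (hierarchical) regression logic on simulated data; the variable names and effect sizes are hypothetical stand-ins, and only the sample size mirrors the study.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 414  # matches the study's n; everything else here is simulated
df = pd.DataFrame({
    "program_type": rng.integers(0, 2, n).astype(float),    # program characteristic
    "age": rng.normal(27.8, 3.5, n),                        # demographic
    "career_setting": rng.integers(0, 3, n).astype(float),  # setting aspiration
    "self_reported_comp": rng.normal(0, 1, n),              # focal predictor
})
df["intention"] = 0.5 * df["self_reported_comp"] + rng.normal(0, 1, n)

# Step 1: program characteristics, demographics, and career aspirations only
base = ["program_type", "age", "career_setting"]
m1 = sm.OLS(df["intention"], sm.add_constant(df[base])).fit()

# Step 2: add self-reported competence; the change in R^2 is its
# incremental contribution beyond the step-1 covariates
m2 = sm.OLS(df["intention"],
            sm.add_constant(df[base + ["self_reported_comp"]])).fit()

print(f"R^2 step 1 = {m1.rsquared:.3f}, step 2 = {m2.rsquared:.3f}, "
      f"delta = {m2.rsquared - m1.rsquared:.3f}")
```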

Findings suggest that a focus on self-awareness and self-knowledge in competency development (Kaslow et al., 2018) would benefit from ensuring trainee perceptions of their competency align with benchmarked progression. Trainees with high assessment competence (both self-reported and performance-based) reported significantly more hands-on instrument use than their peers with lower assessment competence. This pattern suggests that assessment competence may be shaped by coursework and practicum training; thus, efforts to increase exposure to and training with assessments may yield greater competence and career engagement. While these training components seem to be promising targets for increasing trainee assessment capability, implementing and evaluating such efforts cannot be done without modifying existing training frameworks. Indeed, research generally suggests that traditionally defined assessment competence, which stems from knowledge and skills obtained during graduate coursework and clinical practicum, does not contribute meaningfully to perceptions of professional competence in practicing psychologists (Neimeyer et al., 2012b).

2022 AACN: The Personality Assessment Inventory (PAI) and TBI in active duty service members

This is another great project on personality assessment in active duty military personnel who underwent a neuropsych evaluation, conducted with Pat Armistead-Jehle. The work of Tristan (pictured above; lab postbac) and Sarah McCracken (check the lab members webpage) was fantastic. I'll summarize our work and findings.

Previous research on the PAI has examined response patterns in those with recent brain injuries. While some studies (Kennedy et al., 2015; Velikonja et al., 2010) have identified a four-group solution (low/non-symptomatic, high, moderate, and somatic distress), others (Demakis et al., 2007) have suggested a three-group solution (high distress, manic/high energy, and externalizing). Moreover, prior research has not employed the full span of PAI scales. The current study sought to address some of these gaps by using latent profile analysis (LPA) to examine response patterns on the PAI in active-duty military personnel with a remote history of brain injury (n = 384).

LPAs were conducted using the Clinical, Treatment, and Interpersonal scales because of their non-overlapping item content. The best-fitting model (selected on AIC, BIC, SSA-BIC, and entropy, alongside parsimony and theory-based interpretation) was contrasted across PAI subscales, as well as neuropsychological testing, PVT performance, and demographic data. Results suggest a 2-class solution (non-symptomatic [59%] and high symptomatic [40%]), with no differentiation between subsets of symptoms (e.g., manic, externalizing, or somatic, per earlier studies). Scale differences reflected generally large effects, particularly for SOM, ANX, ARD, DEP, PAR, SCZ, and BOR (d = 1.07–2.50). The high symptomatic group also evidenced poorer neurocognitive test performance and more frequent PVT failure. These findings suggest that previously identified response groups are not evident in active-duty military personnel with a remote history of brain injury. Some tables below show off results from the poster (click to download); the class-enumeration logic is sketched just after this paragraph.
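
For the methodologically curious, below is a minimal sketch of this kind of LPA class enumeration on simulated data, using a diagonal-covariance Gaussian mixture as a stand-in for LPA. The profiles and indicator values are invented (only the total n of 384 mirrors the poster), and SSA-BIC is not built into sklearn, so it would need to be computed by hand.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
# Simulated stand-in for PAI Clinical/Treatment/Interpersonal T-scores:
# a "non-symptomatic" profile and a "high symptomatic" profile
X = np.vstack([rng.normal(50, 10, (230, 15)),
               rng.normal(70, 10, (154, 15))])

for k in range(1, 5):
    # covariance_type="diag" mirrors the usual LPA assumption that
    # indicators are conditionally independent within a profile
    gmm = GaussianMixture(n_components=k, covariance_type="diag",
                          random_state=0).fit(X)
    post = gmm.predict_proba(X)
    # Relative entropy: 1.0 means perfectly certain classification
    entropy = 1.0 if k == 1 else 1 - (-np.sum(post * np.log(post + 1e-12))
                                      / (len(X) * np.log(k)))
    print(f"k={k}: AIC={gmm.aic(X):.0f}  BIC={gmm.bic(X):.0f}  "
          f"entropy={entropy:.2f}")
```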

2022 MMPI Symposium

Texas Tech and the PATS lab really brought some awesome stuff this year! We had a number of projects presented by current grad and undergrad members, as well as a big group attending. I couldn't be more thrilled with everyone's work. I'm going to touch on the projects we presented and add some pictures of us having a great time all around. Citations link to presentations for download.

Menton, W. H., Whitman, M. R., Tylicki, J., Morris, N. M., Tarescavage, A. M., Ingram, P. B., Gervais, R. O., Wygant, D. B., & Ben-Porath, Y. S. (2022, June). Predictive Utility of Experimental Higher-Order Validity Scales for the MMPI-3. Comprehensive presentation presented at the 2022 Annual MMPI Research Symposium, Minneapolis, MN.

An item-level approach to general over-/under-reporting appears to perform better than the MMPI-3's content-focused, theory-based (psychological, somatic, cognitive) over-reporting scales and its under-reporting scales.

Keen, M., Morris, N.M., & Ingram, P.B. (2022, June). Development and Initial Validation of the Scale of Scales (SOS) Over-Reporting Scales for the MMPI-3. Data blitz presented at the 2022 Annual MMPI Research Symposium, Minneapolis, MN.

We took a general approach to over-reporting, relying on the assumption that feigning attempts are less specific/more general than some models of symptom over-reporting suggest (see Rogers & Bender). Scale-level means (see Gaines et al., 2013) perform equal to or better than the item-level over-reporting scales of the MMPI-3. We've subsequently replicated this approach in several clinical samples and found equal or better performance, including against several PVTs (a reanalysis of the Ingram et al., 2020 sample of active duty personnel undergoing neuropsych evaluation).
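
As a toy illustration of the scale-level-mean idea (not the SOS scoring rules from the data blitz), the index is just the average elevation across substantive scale T-scores; the profile and flag threshold below are hypothetical.

```python
import numpy as np

def scale_of_scales(t_scores, cut=80.0):
    """Average elevation across substantive scale T-scores (cf. Gaines et al.,
    2013); `cut` is an illustrative flag threshold, not a validated one."""
    sos = float(np.mean(t_scores))
    return sos, sos >= cut

# Hypothetical respondent with uniformly elevated substantive scales
sos, flagged = scale_of_scales([78, 82, 75, 80, 85, 79, 81])
print(f"SOS = {sos:.1f}; flag for over-reporting follow-up: {flagged}")
```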

Cole, B.P., & Ingram, P.B. (2022, June). Treatment prediction in an outpatient telehealth clinic: Relationships between MMPI-3 scores and positive-psychology intervention engagement and outcome. Data blitz presented at the 2022 Annual MMPI Research Symposium, Minneapolis, MN.

Pilot data demonstrated that the MMPI-3 can predict positive psychology intervention outcomes and that the symptom scales are negatively associated with strength traits. This may suggest low-score interpretations tied to strengths, although that approach ignores the orthogonal nature of mental health and mental illness (see Keyes, 2002).

Peters, A.S., Morris, N.M. & Ingram, P.B. (2022, June). Classification Accuracy of the MMPI-3’s Eating Concerns (EAT) Scale using a Common Screening Measure as the Criterion. Poster presented at the 2022 Annual MMPI Research Symposium, Minneapolis, MN.

Ashlinn really rocked! For her first poster, she had to learn all the ROC/classification statistics. Just awesome. Findings suggest that the scale's restricted item content makes predicting any specific pattern of eating pathology difficult at the T75 cut score (which requires only 2 endorsements). Given this, moving the EAT scale to a critical scale, with endorsed items listed at the end of the report, may be wise. Next steps, you ask? Ashlinn's next year will be spent screening individuals into diagnostic groups to grow her samples and examining whether EAT differentiates those diagnostically screened groups.
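
Here is a minimal sketch of the ROC/classification-accuracy workflow Ashlinn had to learn, run on simulated data; the T-score distributions, criterion, and resulting numbers are invented, not her results.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(7)
# Simulated stand-in: a binary screening criterion and EAT T-scores
criterion = rng.integers(0, 2, 300)
eat_t = np.where(criterion == 1,
                 rng.normal(65, 12, 300),   # screen-positive cases
                 rng.normal(52, 10, 300))   # screen-negative cases

auc = roc_auc_score(criterion, eat_t)

# Classification accuracy at the T75 cut referenced above
pred = (eat_t >= 75).astype(int)
tp = np.sum((pred == 1) & (criterion == 1))
fp = np.sum((pred == 1) & (criterion == 0))
fn = np.sum((pred == 0) & (criterion == 1))
tn = np.sum((pred == 0) & (criterion == 0))
print(f"AUC = {auc:.2f}; SN = {tp/(tp+fn):.2f}, SP = {tn/(tn+fp):.2f}, "
      f"PPV = {tp/(tp+fp):.2f}, NPV = {tn/(tn+fn):.2f}")
```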

Morris, N. M., Patterson, T. P., Ingram, P. B., & Cole, B. P. (2022, June). MMPI-3 and Gender: The moderating role of masculine identity on item endorsement. Blitz talk presented at the 2022 Annual MMPI Research Symposium, Minneapolis, MN.

Nicole's focus on expanding contextual interpretation of the MMPI-3 leads the diversity focus needed in assessment. Gender norms predicted an additional 4-13% of variance beyond symptoms and sex for most (11/16) MMPI-3 internalizing scales. Our next steps will use LCA to look at clusters of gender norms (masculine and feminine) alongside MMPI-3 scale presentations.

And now for some pictures of all the lab fun!

Besides the lab fun, I also got a chance to catch up with my old Western Carolina advisor David McCord and his students. The MMPI community is one of the best, and I love being part of it.

HiTOP and the PAI: 2022 APS Presentations

I'm excited to head to my first APS conference to present some recent work by Sarah Hirsch and Megan Keen on the Personality Assessment Inventory (PAI) and efforts to map HiTOP onto this instrument. We focused on military populations, using an EFA/CFA approach in distinct samples to examine how (and how well) these efforts apply to military service members.

We conducted a series of EFAs on a sample of active duty soldiers seen as part of a neuropsychological evaluation, then performed Goldberg's (2006) bass-ackwards analysis to link the observed factor structures across EFA solutions, up to the 7-factor model comprising the HiTOP subspectra. We were able to identify many of the spectra/subspectra factors, and the initial standardized loadings made good sense against the HiTOP model descriptions; however, diagnostic discrimination was not as clean as we had hoped, and there were several unexpected correlations of medium/large magnitude, suggesting some inflated general intercorrelation (reminding us of the Clinical Scale 7/Scale 8 correlation in the MMPI-2 prior to the creation of the Restructured Clinical [RC] scales).
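
For readers unfamiliar with the technique, here is a minimal sketch of the bass-ackwards idea on simulated data: extract factor solutions of increasing size, then correlate factor scores across adjacent levels to trace the hierarchy. The data, rotation, and estimator here are illustrative stand-ins, not our exact pipeline.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
# Simulated stand-in for PAI scale scores (n respondents x p indicators),
# with a shared factor injected so there is some structure to recover
X = rng.normal(size=(400, 20))
X[:, :10] += rng.normal(size=(400, 1))

# Extract 1- through 7-factor solutions (7 = the HiTOP subspectra level)
scores = {k: FactorAnalysis(n_components=k, rotation="varimax",
                            random_state=0).fit_transform(X)
          for k in range(1, 8)}

for k in range(1, 7):
    a, b = scores[k], scores[k + 1]
    # Cross-level correlations: rows are level-k factors, columns level-(k+1)
    cross = np.corrcoef(a.T, b.T)[:k, k:]
    print(f"level {k} -> {k + 1}:\n{np.round(cross, 2)}")
```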

Next up, we took a group of Veterans being evaluated as part of their intake process in an outpatient PTSD Clinical Team (PCT). We started with the initial EFA model but failed to find good fit. Rather than using the EFA correlations as post hoc corrections, we evaluated from the bottom up on individual subspectra, trimming poorly fitting items/spectra. Our goal in taking these steps was to produce a replicable model with only strong, expected relationships, even if that meant a model not fully congruent with HiTOP. Avoiding dual-loading indicators also ensures a more interpretable model, since it maintains component independence.

You can download the posters by clicking here for the CFA or the EFA posters!

Interpretation of Positive Depression Screening on the PHQ

The PHQ-2/9 is one of the most widely used screening measures for depression, if not the most used. It is implemented in a standardized manner in treatment outcome research and patient-based care initiatives. However, interpreting what scores represent within the broad internalizing spectrum of pathology is critical. Despite its items representing 'depression' criteria (A1-A9; DSM-5), these experiences are not unique to depressed individuals. Evaluating screening against the MMPI-3 offers a way to examine interpretive meaning using a new, highly validated broadband measure. Following up on a paper under review by Nicole Morris, presented at the 2021 MMPI Symposium, I was playing around with data visualization.

A positive screen on the PHQ-9 (cut score 10) was most associated with self-doubt, a trend which wasn't entirely consistent on the PHQ-2 (cut score 3). While Self-Doubt (SFD; navy line) maintained a major role, Helplessness/Hopelessness (HLP; green line) was the stand-out. Different item content led to distinct internalizing symptoms driving a positive screen.

PHQ-9
PHQ-2

Personality Assessment Inventory (PAI) Over-reporting Scale Effectiveness

Another first-author pub by the most excellent Nicole Morris!

Nicole did some excellent work building on the limited literature on PAI validity scales, evaluating their effectiveness in a military sample assessed within a neuropsychology clinic. We used performance validity tests (PVTs; the MSVT and NV-MSVT) to define group differences. Limited work had been done prior to now on some of the PAI validity scales (see McCredie & Morey, 2018), so expanding this literature for one of the most popular and widely used personality measures (Ingram et al., 2019, 2022; Mihura et al., 2017; Wright et al., 2017) is critical. I've reproduced the classification accuracy statistics below for ease. The entire paper may be downloaded HERE.

Undergraduate Research: MMPI-3 scales are similar in-person and virtual

One of my fantastic undergraduates conducted a study using existing MMPI-3 study data (Morris et al., 2021; Reeves et al., 2022) to compare the effectiveness of the over-reporting scales across in-person and virtual administrations. Given the guidelines put out about telehealth assessment (Corey & Ben-Porath, 2020) and the expanding research on the general comparability of virtual psychological services, we expected the scales to perform equivalently. Indeed, that is what we found. One implication is that future meta-analyses of the MMPI-3 validity scales will likely not need to consider this element of study design as a potential moderator of scale effectiveness.

Click HERE to download a copy of the study poster.

Webinar on Internship: Get the Materials here!

Today (about 30 minutes from my writing this) I'm going to be presenting to Division 12 (Clinical Psychology)'s section for Students and Early Career Professionals (formally called Section Ten). I'm super excited to help demystify the internship process and help applicants maximize their success and desired career trajectories. I also want to make sure the materials from the talk are available.

Click the slide below to download a PowerPoint version of the talk.

After the talk, I will update this post to include a video. Stay tuned!