MMPI-3 Over-reporting Scales: A Simulation Study of PTSD, Mild TBI, and Comorbid Presentation

New Paper Alert, with not one but THREE of my advisees!

Download the paper HERE

Study Context

The MMPI-3 is the latest revision in the MMPI family of instruments and includes updated norms and scale revisions. Among the revisions in the MMPI-3 are changes to several of the well-validated MMPI-2-RF over-reporting scales: three scales include new or reworked items, in addition to the renorming process. In this study (just accepted for publication in The Clinical Neuropsychologist), we examine how effective those scales were in a symptom-coached simulation design.

We chose a four-condition design (PTSD, mTBI, comorbid PTSD+mTBI, and an honest-responding control) since validity scales on the MMPI are designed to detect different symptom sets of invalid responding (e.g., infrequent psychopathology on F and Fp, or infrequent somatic/neurological concerns on Fs, FBS, or RBS). PTSD offers a largely internalizing symptom set, while mTBI is largely somatic/cognitively focused. Few studies have evaluated comorbid conditions in validity scale feigning, and symptom sets have previously moderated scale effectiveness (both in simulation designs and in meta-analytic reviews). Given the high frequency of PTSD and mTBI overlap in military/veteran samples, this provided a great context for us to examine the MMPI-3’s scale utility. We coached participants on symptoms via a brief verbal description and a written symptom description derived from Wikipedia for each condition.

Results

Across the four conditions, the scales showed effect sizes relative to control (d ~ 1.0 to 1.5) similar to those in other studies of symptom validity test effectiveness (e.g., other over-reporting scales on measures like the MMPI-2, MMPI-2-RF, or PAI), but negligible effects between diagnostic conditions. Our effects differ from those of the other simulation study out there (Whitman et al., 2021); however, ours are closer to what one would expect. In our original paper we included both the MMPI-2-RF and the MMPI-3, as well as incremental analyses, but our final report covers only the MMPI-3. I’ve provided both analyses below. Results are similar across instruments, which isn’t surprising given the correlation between the scales. In general, using the MMPI-3 over-reporting scales at their recommended cut scores means you can be confident that those who invalidate are most likely exaggerating or misrepresenting symptoms, but you may miss many others who are misrepresenting their symptoms.
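For readers who want the arithmetic behind that comparison, a Cohen’s d is just the standardized mean difference between a feigning group and controls. Here is a minimal Python sketch; the group names and numbers are hypothetical illustrations, not our study data.

import numpy as np

def cohens_d(group_a, group_b):
    # Standardized mean difference using the pooled standard deviation
    n_a, n_b = len(group_a), len(group_b)
    var_a = np.var(group_a, ddof=1)
    var_b = np.var(group_b, ddof=1)
    pooled_sd = np.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    return (np.mean(group_a) - np.mean(group_b)) / pooled_sd

# Hypothetical T scores, NOT our study data: coached simulators vs. honest controls
rng = np.random.default_rng(seed=1)
simulators = rng.normal(loc=80, scale=12, size=60)
controls = rng.normal(loc=62, scale=12, size=60)
print(round(cohens_d(simulators, controls), 2))  # lands near the d ~ 1.0-1.5 range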

We also calculated sensitivity and specificity across a variety of scale cut scores and, in general, scale performance was consistent with past work on broadband validity scales (high specificity, low sensitivity, and mean scores below recommended cut values).
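To make concrete what that table reports, here is a minimal sketch of computing sensitivity and specificity at a range of candidate cut scores. The variable names and the cut range are hypothetical placeholders, not the recommended MMPI-3 cutoffs.

import numpy as np

def accuracy_by_cut(scores, is_feigning, cut_scores):
    # For each cut, flag protocols at/above it as invalid, then compute
    # sensitivity (feigners caught) and specificity (honest responders passed)
    scores = np.asarray(scores, dtype=float)
    is_feigning = np.asarray(is_feigning, dtype=bool)
    for cut in cut_scores:
        flagged = scores >= cut
        sensitivity = flagged[is_feigning].mean()
        specificity = (~flagged)[~is_feigning].mean()
        print(f"cut {cut}: sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")

# Hypothetical usage; the cut range is a placeholder, not the published cutoffs
# accuracy_by_cut(t_scores, group_is_simulator, cut_scores=range(70, 121, 10))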

The lack of effect between diagnoses with dissimilar symptom sets (e.g., somatic/cognitive versus psychopathology) was unexpected given past findings on moderation. FBS was also distinct in its performance relative to the other scales: people elevated less on it and rarely invalidated it. This is curious since it was designed for head-injury litigants. Our FBS findings may reflect the simulation study design; however, since it is the only scale with that pattern of results, further research into why this occurred is warranted. Replication with military and veteran samples, given the high relevance to those referral concerns, is also needed.

Service Era – Does it Matter for Assessment?

New Paper Alert!

Service era is a huge part of how military/Veterans identify themselves, and it varies with their wartime and homecoming experiences. When it comes to psychological assessment, the question is whether these variations in experience become important considerations for culturally competent and responsible assessment practice. This has been investigated a little, but the results have been interpreted as contradictory due to the overlap in (and inherent revisions between) the instruments compared.

A brief history: Glenn et al. (2002) started asking this question with the MMPI-2. They compared response endorsement in a sample (Vietnam vs. Gulf 1) of those receiving PTSD Clinical Team (PCT) outpatient care and concluded that the wartime experience was different and important. Conversely, Ingram et al. (2020) used the MMPI-2-RF in a nationally drawn PCT sample and found no differences (Vietnam vs. Gulf 1+2). I found that Glenn’s differences may be a function of measurement error and scale quality, given the scale changes in the MMPI-2-RF. Why did I combine Gulf 1+2 when the VA considers them different eras, you may ask? Because of how the data were gathered: service era was classified as it is reported within the electronic medical record system. A definite shortcoming when looking at eras. So studies on service era assessment have (i) excluded Post-9/11 because of study age (Glenn) and (ii) combined Gulf 1 and Post-9/11 (Gulf 2) into a single sample, despite the substantial variation in service experience.

They have also focused only on the MMPI (2/2-RF). While the MMPI is popular, the PAI is equally widely used (Ingram et al., in press; Wright et al., 2017) and doesn’t have the same problem of scale revisions between versions as a potential explanation for findings (see Ingram et al., 2020). So what did we do about this shortage in the literature? We sampled Veterans from PTSD Clinical Teams (PCTs) and compared Vets from all three eras (Vietnam, Gulf, and Post-9/11) on the PAI, after controlling for gender and combat exposure severity. And what did we find?

These results are a (non-comprehensive) sample of the scales analyzed; the differences interpreted were statistically significant and also clinically meaningful (i.e., greater than a medium effect / 5 T-points). We didn’t have item-level data, so we couldn’t evaluate some aspects in question. It’s important to note the high frequency of mean scores at or above T70, the PAI cut score for clinical severity, a frequency more pronounced in the Vietnam and Post-9/11 groups.
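For the methods-minded, the era comparison boils down to a model like the following. This is a minimal sketch with statsmodels under assumed column names (era, gender, combat_exposure, and a PAI scale T score); it is not our actual analysis script.

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

def era_comparison(df: pd.DataFrame, scale: str):
    # Model a PAI scale T score as a function of service era,
    # controlling for gender and combat exposure severity
    model = smf.ols(f"{scale} ~ C(era) + C(gender) + combat_exposure", data=df).fit()
    return sm.stats.anova_lm(model, typ=2)  # Type II ANOVA table for the era effect

# Hypothetical usage: the data frame and column names are stand-ins
# print(era_comparison(df, scale="dep_T"))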

Emma (PATS lab undergrad RA) presented on the MMPI-3’s EAT scale at her first national conference!

Emma did such a fantastic job during her presentation. The blitz talk was 3 minutes, and she was presenting to the most knowledgeable group about psychological assessment, and the MMPI specifically, that I can imagine being assembled. A talk of this quality is evidence not only of how great a student Emma is, but also of how great a mentor she had in Nicole Morris. Great job to both of them.

Not only am I excited about how great her talk was, but it also got me even more excited for July, when we are going to work on putting together some papers based on this project. Want to see a sneak peek of the first paper – an expanded interpretation of the MMPI-3 EAT scale in college students? Check out her presentation below (click to download the PowerPoint).

A Study of the APPIC Match

Click here to download the pre-print

Two years ago I started working with Dr. Adam Schmidt on another training-related study, just accepted for publication in the Journal of Clinical Psychology. This time, we were curious about the factors leading to successful internship application and match at various sites. There has been some work on this in the past (of which Ginkel et al., 2010 is the most recent), but research on training in health service psychology is limited. Studies on internship, for instance, have frequently lumped sites together (VA, medical center, college counseling center) despite those sites being extremely distinct in their needs and treatment goals. As such, we examined what Training Directors value (i.e., believe leads to better outcomes) at the interview-offer stage and at the applicant-ranking stage. We compared across site types as defined by APPIC, using responses from 186 (~30%) training directors. Below we have the pre- and post-interview criteria.

Here are a few major standout points to us:

(1) Publications are valued less than conference presentations (WHAT?) for interview offers, and research is valued minimally in general, perhaps reflecting a research-practice divide. A low value on research is also associated with less emphasis on ESTs, which has some unique implications for the professional development of future psychologists (see APA, 2006).

(2) Differences in criteria are frequently on things that take longer to become involved in (research output; assessment, given the year in the program for intellectual/personality coursework; etc.), meaning that for maximum flexibility in training, those things must be started earlier. Said another way, it will be difficult to shift toward an internship that values those later (such as an AMC), while it would be easier to shift away. This has implications for program priorities in the timing of offerings and progression.

(3) At the ranking stage, people value in-person interview performance A LOT. Even the weight placed on attending an in-person interview varies between sites. This is strange given that in-person interviews are notoriously bad predictors of subsequent work behaviors. With the virtual interviews this past year during COVID-19, this calls for some interesting and important follow-up studies. The potential impact on trainees is huge.

We also asked the question “What is fit?” and used qualitative methods to identify themes, how they differed between sites, and where the listed themes intersected. So what is fit? Training directors tend to identify three patterns of characteristics, and how these differ should guide trainees in what they should emphasize, and when.

Copyright APA 2021

Graduation Weekend for the PATS lab

Chris Estes, Brittney Golden [Grad Student], Brittany Leva, Dani Taylor, Nicole Morris [Grad Student], Me (Top Row)
Tristan Herring, Mia Chu, Kassidy Wilson, and Will Derrick (Bottom Row)

WOW! I can’t believe so many of the awesome undergrads are leaving the lab all at once (Kassidy, Brittany, Dani, and Will… and Liz Morger left us in December). We had such a huge group for the last two years, and they all helped the lab feel like a little family. Now, suddenly, HALF are gone this year! As sad as it is to see them go, I’m excited to see what they do next and can’t wait to hear about all the successes to come. It was also super awesome to get to meet the families of the graduating seniors. PATS alumni will be spread all around the country – watch out!

A few members couldn’t make it to the cookout this year sadly, and they were missed. For those starting in the lab next year, know that we will definitely do this again!

Incoming PATS lab members!

I’m thrilled to have Megan (left) and Bryce (right) joining the PATS lab here at Texas Tech this fall. Megan is interested in assessment of psychopathology and will be focusing on projects related to the PAI and MMPI, and the implementation of contemporary models of diagnoses. Bryce’s passions revolve around military and veteran mental health service and will be helping with projects involving those individuals. I can’t wait to see them in Lubbock, and to start working with them on the projects that match their interests.

A related thought: it was such a remarkable year for applicants, and although I’m happy to have these two joining me, I also want to say that there were so many other amazing applicants whom I was unable to interview or accept. I’m proud of the PATS lab, as well as each of those who sought out a place in our lab – even if they didn’t ultimately wind up with us.

Congrats to Nicole Morris: A research grant story

The Texas Tech Department of Psychological Sciences hosted the first round of its now-annual competitive grant program, and each proposal underwent an NIH-style panel review. Each proposal was judged on its feasibility, significance, innovation, and methodological approach. With a strong pool of applications, the department was unable to fund all proposals. Nicole (@NicoleLemaste10 on the Twitterverse) was one of those who was funded, and I’m proud of the proposal that they put together, so let me just say….

Nicole’s project examines the effectiveness of the Minnesota Multiphasic Personality Inventory-3 (MMPI-3)’s new Eating Concerns (EAT) scale within sexual minority individuals. Below I have provided the three stated aims of their proposal:

Aim 1: Expand interpretive understanding of EAT score elevations with psychological risk factors for eating pathology (e.g., body image, intuitive eating). Hypothesis: EAT scale scores will demonstrate strong associations with criterion measures of body image and eating behaviors that are risk indicators for AN, BN, and BED. Analytic Plan: Using correlation and regression analyses, I will identify the associations between criterion measures of disordered eating and both EAT scale scores and individual EAT items.

Aim 2: Explore differential performance of EAT scale scores between LGBTQIA+ and heterosexual individuals. Hypothesis: EAT scores will be higher for those within the LGBTQIA+ community. Analytic Plan: I will contrast descriptive characteristics between sexuality and gender minority groups. Between-group comparisons will be conducted using t-tests and ANOVA, with appropriate post hoc testing and effect size calculations.

Aim 3: Investigate cut-score accuracy for identifying the presence of clinical eating pathology in LGBTQIA+ college students. Hypothesis: Alternative cut scores will provide better classification accuracy for identifying eating pathology in LGBTQIA+ individuals. Analytic Plan: Using classification accuracy statistics (i.e., sensitivity, specificity, hit rate, etc.), I will determine whether the standard cut score is appropriate for the LGBTQIA+ community.
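As a rough illustration of Aim 3’s analytic plan, here is a minimal sketch of comparing candidate cut scores on classification accuracy and picking one via Youden’s J (one common way to balance sensitivity and specificity). All variable names and the cut range are hypothetical, not Nicole’s actual data or analysis code.

import numpy as np

def best_cut(eat_scores, has_pathology, candidate_cuts):
    # Score each candidate cut by Youden's J (sensitivity + specificity - 1)
    # and return the cut offering the best balance
    scores = np.asarray(eat_scores, dtype=float)
    truth = np.asarray(has_pathology, dtype=bool)
    results = []
    for cut in candidate_cuts:
        flagged = scores >= cut
        sens = flagged[truth].mean()
        spec = (~flagged)[~truth].mean()
        hit_rate = (flagged == truth).mean()  # overall proportion classified correctly
        results.append((sens + spec - 1, cut, sens, spec, hit_rate))
    return max(results)  # tuples compare on J first

# Hypothetical usage; names and range are illustrative stand-ins
# j, cut, sens, spec, hit = best_cut(eat_T, dx_labels, candidate_cuts=range(60, 91, 5))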

I’m looking forward to sharing more of Nicole’s work on EAT as we move forward.

Examining admission and formation outcomes for Catholic clergy applicants with the MMPI-2-RF: A Prospective Study

I’m thrilled to have a paper with Anthony Isacco and TTU’s newest faculty member, Nick Borgogna, accepted for publication in Psychological Assessment. We utilized a sample of Catholic clergy applicants who underwent a psychological evaluation to predict whether those individuals were ultimately admitted by the Catholic Church to the formation training program, and whether they completed that training (typically a 5+ year process). We utilized a series of relative risk estimates for each MMPI-2-RF scale at multiple cut scores for each outcome (admission, completion). We also compared mean scores between the groups to summarize overall patterns of group difference. This prospective evaluation offers support for the MMPI-2-RF in aiding these decisions, as elevations on numerous scales were associated with increased risk of less desirable outcomes (e.g., non-admission or non-completion). Lower thresholds for clinical scales are recommended.
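For readers unfamiliar with relative risk, at a given cut score it is simply the ratio of outcome rates between elevated and non-elevated groups. A minimal sketch, with hypothetical variable names and cut value rather than our study data:

import numpy as np

def relative_risk(scale_T, bad_outcome, cut):
    # Risk of the outcome among applicants at/above the cut,
    # relative to the risk among those below it
    scores = np.asarray(scale_T, dtype=float)
    outcome = np.asarray(bad_outcome, dtype=bool)
    elevated = scores >= cut
    risk_elevated = outcome[elevated].mean()  # e.g., non-admission rate when elevated
    risk_below = outcome[~elevated].mean()    # rate among non-elevated applicants
    return risk_elevated / risk_below         # RR > 1 means elevation raises risk

# Hypothetical usage; scale, outcome, and cut are illustrative stand-ins
# rr = relative_risk(eid_T, non_admission, cut=65)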

I have created an APA-style pre-print for the paper that may be downloaded here. This paper is not the copy of record and may not exactly replicate the final, authoritative version of the article. Please do not copy or cite without authors’ permission. The final article will be available, upon publication, via its DOI: 10.1037/pas0001028.

© 2021, American Psychological Association.

Select results are presented and summarized below.

The USCCB guidelines (2015) for psychological evaluations to identify problems with “affective maturity” point our current research toward having a clear definition of what that might look like on the MMPI-2-RF when contrasting group means. Specifically, Emotional/Internalizing Dysfunction (EID) and Dysfunctional Negative Emotions (RC7) require special consideration, and consideration of these scales will likely require adaptations from traditional cut scores to lowered thresholds. Our study provides support for the use of these lower cut scores in clergy applicants, consistent with existing literature on other public service personnel. While prediction of proximal outcomes (admission) was stronger than that of distal outcomes (formation completion), scales consistently demonstrated increased risk as scores elevated, particularly the internalizing pathology scales (e.g., EID, RC7, Self-Doubt [SFD]) and those measuring Somatic/Cognitive Concerns. A notable exception was RC2, which did not offer the same predictive capacity. Clinicians conducting these evaluations may rely on the MMPI-2-RF for its prospective predictive utility with clergy. Given the improved psychometrics of this test, and of the MMPI-3 over and beyond the MMPI-2, a transition to modern testing instruments for clergy evaluations is recommended.