Congrats to Nicole Morris: A research grant story

The Texas Tech Department of Psychological Sciences hosted the first round of its now-annual competitive grant program, and each proposal underwent an NIH-style panel review judging its feasibility, significance, innovation, and methodological approach. With a strong pool of applications, the department was unable to fund every proposal. Nicole (@NicoleLemaste10 on the Twitterverse) was among those who were funded, and I’m proud of the proposal they put together, so let me just say….

Nicole’s project examines the effectiveness of the new Eating Concerns (EAT) scale of the Minnesota Multiphasic Personality Inventory-3 (MMPI-3) in sexual minority individuals. Below are the three stated aims of their proposal:

Aim 1: Expand interpretive understanding of EAT score elevations with psychological risk factors for eating pathology (e.g., body image, intuitive eating). Hypothesis: EAT scale scores will demonstrate strong associations with criterion measures of body image and eating behaviors that are risk indicators for AN, BN, and BED. Analytic Plan. Using correlation and regression analyses, I will identify the associations between criterion measures of disordered eating and both EAT scale scores and individual EAT items.

Aim 2: Explore differential performance of EAT scale scores between LGBTQIA+ and heterosexual individuals. Hypothesis: EAT scores will be higher for those within the LGBTQIA+ community. Analytic Plan. I will contrast descriptive characteristics between sexuality and gender minority groups. Between-group comparisons will be conducted using t-tests and ANOVAs with appropriate post hoc testing and effect size calculations.
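The between-group comparisons in Aim 2 boil down to mean differences and effect sizes. A minimal sketch with hypothetical group scores, using Cohen’s d (standard pooled-SD formula) as the effect size:

```python
from math import sqrt
from statistics import mean, stdev

def cohens_d(a, b):
    """Pooled-SD standardized mean difference between two groups."""
    na, nb = len(a), len(b)
    pooled_sd = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                     / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled_sd

# Hypothetical EAT T-scores for the two comparison groups.
lgbtqia_scores = [62, 58, 65, 70, 61, 59]
heterosexual_scores = [50, 55, 48, 52, 57, 51]
d = cohens_d(lgbtqia_scores, heterosexual_scores)
print(f"d = {d:.2f}")
```

A positive d here would be consistent with the hypothesis that EAT scores run higher in the LGBTQIA+ group; the actual analyses would pair this with t-tests/ANOVAs and post hoc testing.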

Aim 3: Investigate cut-score accuracy for identifying the presence of clinical eating pathology in LGBTQIA+ college students. Hypothesis: Alternative cut scores will provide better classification accuracy for identifying eating pathology in LGBTQIA+ individuals. Analytic Plan. Using classification accuracy statistics (e.g., sensitivity, specificity, hit rate), I will determine whether the standard cut score is appropriate for the LGBTQIA+ community.
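The classification accuracy statistics in Aim 3 reduce to simple counts at a given cut score. A minimal sketch (the scores and diagnostic statuses below are hypothetical):

```python
def classification_stats(scores, has_pathology, cut):
    """Sensitivity, specificity, and hit rate for flagging scores >= cut."""
    tp = sum(1 for s, y in zip(scores, has_pathology) if s >= cut and y)
    fn = sum(1 for s, y in zip(scores, has_pathology) if s < cut and y)
    tn = sum(1 for s, y in zip(scores, has_pathology) if s < cut and not y)
    fp = sum(1 for s, y in zip(scores, has_pathology) if s >= cut and not y)
    sensitivity = tp / (tp + fn)        # true positives caught
    specificity = tn / (tn + fp)        # true negatives correctly cleared
    hit_rate = (tp + tn) / len(scores)  # overall correct classification
    return sensitivity, specificity, hit_rate

# Hypothetical data: 4 cases with eating pathology, 6 without.
scores = [72, 68, 55, 80, 45, 50, 62, 40, 48, 58]
status = [True, True, True, True, False, False, False, False, False, False]
sens, spec, hit = classification_stats(scores, status, cut=60)
```

Comparing these statistics across candidate cut scores, separately by group, is exactly how one would judge whether an alternative cut outperforms the standard one.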

I’m looking forward to sharing more of Nicole’s work on EAT as we move forward.

Examining admission and formation outcomes for Catholic clergy applicants with the MMPI-2-RF: A Prospective Study

I’m thrilled to have a paper with Anthony Isacco and TTU’s newest faculty member Nick Borgogna accepted for publication in Psychological Assessment. We used a sample of Catholic clergy applicants who underwent a psychological evaluation to predict whether those individuals were ultimately admitted by the Catholic Church to the formation training program, and whether they completed that training (typically a 5+ year process). We computed a series of relative risk estimates for each MMPI-2-RF scale at multiple cut scores for each outcome (admission, completion). We also compared mean scores between the groups to summarize overall patterns of group difference. This prospective evaluation offers support for the MMPI-2-RF in aiding these decisions, as elevations on numerous scales were associated with increased risk of less desirable outcomes (e.g., non-admission or non-completion). Lower thresholds for clinical scales are recommended.
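For readers unfamiliar with the relative risk approach: at a given cut score, it is just the ratio of the outcome rate among elevated scorers to the rate among non-elevated scorers. A quick sketch with made-up counts (not our actual data):

```python
def relative_risk(events_elevated, n_elevated, events_normal, n_normal):
    """Risk ratio: outcome rate at/above the cut divided by the rate below it."""
    return (events_elevated / n_elevated) / (events_normal / n_normal)

# Hypothetical: non-admission among applicants scoring at/above vs. below a cut.
rr = relative_risk(events_elevated=6, n_elevated=20,
                   events_normal=3, n_normal=30)
print(round(rr, 2))  # 0.30 / 0.10 -> elevated scorers 3x as likely non-admitted
```

Repeating this at several candidate cut scores per scale is what lets you see where risk begins to climb, which is also why lower-than-traditional thresholds can emerge as the recommendation.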

I have created an APA-style pre-print for the paper that may be downloaded here. This paper is not the copy of record and may not exactly replicate the final, authoritative version of the article. Please do not copy or cite without authors’ permission. The final article will be available, upon publication, via its DOI: 10.1037/pas0001028.

© 2021, American Psychological Association.

Select results are presented and summarized below.

The USCCB guidelines (2015), which call for psychological evaluations to identify problems with “affective maturity,” point our current research toward defining what that might look like on the MMPI-2-RF when contrasting group means. Specifically, Emotional/Internalizing Dysfunction (EID) and Dysfunctional Negative Emotions (RC7) require special consideration, and interpretation of these scales will likely require lowering the traditional cut scores. Our study provides support for use of these lower cut scores in clergy applicants, consistent with existing literature on other public service personnel. While proximal outcomes (admission) were predicted more strongly than distal outcomes (formation completion), scales consistently demonstrated increased risk as scores elevated, particularly the internalizing pathology scales (e.g., EID, RC7, Self-Doubt [SFD]) and those measuring Somatic/Cognitive Concerns. A notable exception was RC2, which did not offer the same predictive capacity. Clinicians conducting these evaluations may rely on the MMPI-2-RF for its prospective predictive utility with clergy. Given the improved psychometrics of this test (and of the MMPI-3) over and beyond those of the MMPI-2, we recommend transitioning to modern testing instruments for clergy evaluations.

New Paper: Treatment outcomes in a residential substance use sample

I’m thrilled to have gotten the e-mail from Nicole Morris this morning letting me know that our paper was accepted for publication in Addictive Behaviors. The study uses a large, multi-site sample of individuals receiving residential substance use treatment and asks whether the CES-D (a common depression measure) is useful for predicting treatment outcomes. The answer is a resounding yes!

Download the paper HERE

Some brief results are presented below.

  1. The CES-D has 3 factors in residential substance use populations, but scores largely represent the negative mood/anhedonia experiences of depression because of how many items load on that factor
  2. CES-D scores are (unsurprisingly) very high in those undergoing substance use treatment, with most exceeding a screening cut score for depression
  3. Higher CES-D scores predict worse discharge outcomes (fewer normal discharges, more administrative and AMA discharges, though this varies by symptom), and scores tend to follow 3 distinct paths over time with separate intercepts (starting score) and slopes (rate of change)
  4. Drug of choice and gender don’t play a role in the CES-D’s predictive utility in this population
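As a toy illustration of point 2: the CES-D is conventionally screened with a cut score of 16 (the widely used threshold; the scores below are made up), and the proportion exceeding it is a one-liner:

```python
def pct_above_cut(scores, cut=16):
    """Proportion of respondents at or above a screening cut score."""
    return sum(s >= cut for s in scores) / len(scores)

# Hypothetical CES-D totals from a residential treatment intake.
cesd_scores = [22, 31, 14, 40, 18, 27, 9, 35]
print(pct_above_cut(cesd_scores))  # 6 of 8 meet or exceed the cut
```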

Applying for graduate school?

Applying to graduate school can be super confusing and stressful. I wanted to try to give some insight into what you can do along the way to maximize your potential for admission into a psychology doctoral program, and how I (and many others) view applications and the process in general. There is a lot more that I could say, but I wanted to give some sense of my perspective on applicants and their “fit” with me as a mentor.


A few take-aways

  1. Consider funding – being debt free is important
  2. Make sure the program has good training outcomes
  3. Make sure your mentor is someone you want to be in a professional relationship with for several years (how you are treated matters)
  4. Be flexible in your pathways to career goals, and in your geographic openness
  5. Be proactive in your planning – being a lab self-starter is a HUGE thing
  6. This is an extremely competitive process, so it is possible to be a good candidate and still not get accepted.

Doing neuropsychology evaluations with the PAI? The CBS scale is an effective validity scale for military populations

Although the PAI is trained and used at rates similar to the MMPI (Wright et al., 2017; Ingram et al., 2020), its validity scales have not received the same level of study, nor do they show the same rates of detection for invalid responding within military populations (see Brittney’s thesis as an example). In fact, even when new validity scales are developed, they are rarely assessed beyond their initial validation.

Gaasedelen et al. (2019) developed the Cognitive Bias Scale (CBS) as a comparative measure to the MMPI-2-RF’s highly effective Response Bias Scale (RBS). The RBS employed criterion coding, using PVT failure to identify potentially useful items, and as a result of these methods some consider it the most reliable MMPI-2-RF validity scale given its blended SVT/PVT approach (Ingram & Ternes, 2016). The new CBS used the same methods to develop a scale assessing feigned cognitive symptoms on the PAI, a symptom domain that was previously overlooked. In our most recent study, we replicated their validation in an active duty military sample to see if their suggested cut scores and observations of effectiveness were generalizable.


What did we find?

  1. Large effect sizes differentiated CBS scores between those passing and those failing PVTs
  2. The CBS had similarly high specificity and low sensitivity as in the initial CBS validation, comparable to those observed for the MMPI-2-RF’s RBS
  3. In general, a cut score of 16 is recommended to maximize specificity while maintaining moderate sensitivity

Check out the table below for classification accuracy information (red bolded text indicates recommended values for each comparison [specificity > .90, sensitivity ≈ .30]):


Brittney defended her thesis: MMPI-2-RF and the PAI in differentiating PTSD and Depressive Disorders


Brittney did absolutely awesome yesterday and defended her thesis. There is too much to say about this project; I’m thrilled about how it went and how it turned out. We’re starting to transform the project into papers next and anticipate that the papers Brittney writes will be very useful additions to the literature surrounding differential diagnosis in active duty personnel, greatly expanding the available literature on these two instruments.

If you want to catch a glimpse of how she did, here is her presentation (about 27 minutes and includes only her presentation/summary of the findings): Check it out!

Training in Assessment: Current State and Future Directions

About a year and a half ago I launched (and subsequently published) a pilot project with Drs. Adam Schmidt and Matt Cribbet to look at trends in training for psychological assessment by asking trainees themselves about their experiences. This week, the first paper from the large-scale study that followed up on that pilot, based on the feedback we received, was published in Training and Education in Professional Psychology (TEPP), this time using a national sample of 534 doctoral trainees in health service psychology programs from across the country. GET THE PDF HERE

In short, our results suggest that (i) the patterns of training coverage mirror the instrument use patterns of psychologists currently in clinical practice, (ii) students receive more frequent didactic and classroom exposure during training than practice opportunities with clients, and (iii) program types [PhD, PsyD] and program areas [Clinical, Counseling] are generally similar in their coverage. Let’s break down what this means to me, as a researcher and professional working to ensure appropriate and effective psychological assessments.

  • It’s good to see training conform to what professionals do, in general. This means that students will likely be ready to step into similar professional roles as those we currently see existing.
  • Having the same content across programs means that in a way, the field has converged on what is the standard of content coverage. That doesn’t mean that all content is covered the same, or that all programs are equally good at training, but it is a good starting point for ensuring high levels of client care.
  • There are some things that students don’t get much (or enough) training on. One is structured diagnostic interviewing, with 25% of students reporting no training or not knowing whether they were trained (which is just as bad). Another is symptom validity testing. Response styles are a major part of ensuring appropriate interpretations are made, so not being prepared to assess invalid responding means not being ready to make diagnostic decisions appropriately with the aid of assessments. This is a huge issue for the VA, and for veterans: since 50-70% produce invalid symptom response styles on the MMPI (Ingram et al., 2019), making sure that responding is characterized is critical to what treatment recommendations come next. Good care starts at good training. Say it with me.
  • Students get lots more classroom time than hands-on learning, but hands-on learning is when the complexity and nuance of this advanced integrative skill come into play. To advance training, there has to be more clinical use – not just class coverage.


There was a lot more to unpack from the article, but those were a few of my reflections. I’m looking forward to where we go next as a research team on this, and am excited to see the work related to performance-based competency and factors impacting use and intention to work with psychological assessment.

MMPI-2-RF and MMPI-3: Implications for treatment engagement

Nicole recently presented research summarizing a paper we have out for review examining the utility of the MMPI-2-RF and MMPI-3 to predict treatment use and treatment-related attitudes in a short-term longitudinal sample of individuals with moderate to severe depressive symptoms. This was presented at the 2020 MMPI symposium and the PowerPoint may be FOUND HERE [CLICK]


Literature surrounding the MMPI-2-RF has started to converge on which scales best predict treatment engagement and outcomes; however, it remains limited in several ways. For instance, externalizing scales frequently emerge as indicators of treatment dropout (Anestis et al., 2015; Mattson et al., 2012; Tylicki et al., 2019); however, this is not always true (Arbisi et al., 2013; Tarescavage et al., 2015). Moreover, studies have frequently focused on outcomes using clients who have already initiated treatment across different settings, while only one has examined the capacity of the MMPI-2-RF scales to predict treatment initiation (Arbisi et al., 2013). As such, there is notable variability in scale-specific findings as well as in the types of behaviors the MMPI-2-RF has been shown to predict. In addition, the soon-to-be-released MMPI-3 underscores the necessity of establishing a similar research base for the new instrument. While many of its scales are based on existing MMPI-2-RF scales, the new and revised scales have yet to undergo extensive validation. This is the first such study to examine treatment use and engagement among those who would likely benefit from those services but who were not recruited from treatment locations (i.e., some are in therapy and some are not).

Big Takeaways

  1. The MMPI-3 has stronger and broader relationships with criterion measures, suggesting probable improvement over the MMPI-2-RF scales.
  2. Measures within the internalizing and interpersonal domains are the strongest and most consistent predictors, and externalizing scales are also useful.

An overview of general scale-related findings is summarized below: