

The Texas Tech Department of Psychological Sciences hosted the first of its now-annual competitive grant programs, and each proposal underwent an NIH-style panel review. Each proposal was judged on its feasibility, significance, innovation, and methodological approach. With a strong pool of applications, the department was unable to fund every proposal it received. Nicole (@NicoleLemaste10 on the Twitterverse) was one of those who was funded, and I’m proud of the proposal that they put together, so let me just say….
Nicole’s project examines the effectiveness of the Minnesota Multiphasic Personality Inventory-3 (MMPI-3)’s new Eating Concerns (EAT) scale among sexual minority individuals. Below are the three stated aims of their proposal:
Aim 1: Expand interpretive understanding of EAT score elevations in relation to psychological risk factors for eating pathology (e.g., body image, intuitive eating). Hypothesis: EAT scale scores will demonstrate strong associations with criterion measures of body image and eating behaviors that are risk indicators for AN, BN, and BED. Analytic Plan: Using correlation and regression analyses, I will identify the associations between criterion measures of disordered eating and both EAT scale scores and individual EAT items.
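For readers less familiar with this kind of analytic plan, a correlation/regression approach like Aim 1’s can be sketched in a few lines of scipy. This is not Nicole’s data or code; the scores below are simulated purely for illustration.

```python
import numpy as np
from scipy import stats

# Simulated stand-ins: EAT T-scores and a hypothetical criterion
# measure of body image concerns (illustrative values only)
rng = np.random.default_rng(0)
eat_scores = rng.normal(50, 10, 200)
body_image = 0.5 * eat_scores + rng.normal(0, 8, 200)

# Zero-order association between EAT scores and the criterion measure
r, p = stats.pearsonr(eat_scores, body_image)

# Simple linear regression of the criterion on EAT scores
slope, intercept, r_value, p_value, stderr = stats.linregress(eat_scores, body_image)
print(f"r = {r:.2f}, slope = {slope:.2f}")
```

The same pattern would simply be repeated for each criterion measure, and at the item level for the individual EAT items.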
Aim 2: Explore differential performance of EAT scale scores between members of the LGBTQIA+ community and heterosexual individuals. Hypothesis: EAT scores will be higher for those within the LGBTQIA+ community. Analytic Plan: I will contrast descriptive characteristics between sexual and gender minority groups. Between-group comparisons will be conducted using t-tests and ANOVAs with appropriate post hoc testing and effect size calculations.
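A between-group comparison with an effect size, as in Aim 2, looks like this in practice. Again, the numbers are simulated stand-ins, not study data, and the group labels are hypothetical.

```python
import numpy as np
from scipy import stats

# Simulated EAT T-scores for two hypothetical groups (illustrative only)
rng = np.random.default_rng(1)
group_a = rng.normal(55, 10, 80)   # stand-in for sexual minority respondents
group_b = rng.normal(50, 10, 120)  # stand-in for heterosexual respondents

# Independent-samples t-test (Welch's variant, no equal-variance assumption)
t, p = stats.ttest_ind(group_a, group_b, equal_var=False)

# Cohen's d using the pooled standard deviation
n1, n2 = len(group_a), len(group_b)
pooled_sd = np.sqrt(((n1 - 1) * group_a.var(ddof=1) +
                     (n2 - 1) * group_b.var(ddof=1)) / (n1 + n2 - 2))
d = (group_a.mean() - group_b.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.3f}, d = {d:.2f}")
```

With more than two sexuality/gender groups, the t-test would be swapped for a one-way ANOVA (`stats.f_oneway`) followed by post hoc pairwise tests.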
Aim 3: Investigate cut-score accuracy for identifying the presence of clinical eating pathology in LGBTQIA+ college students. Hypothesis: Alternative cut-scores will provide better classification accuracy for identifying eating pathology in LGBTQIA+ individuals. Analytic Plan: Using classification accuracy statistics (i.e., sensitivity, specificity, hit rate, etc.), I will determine whether the standard cut-score is appropriate for the LGBTQIA+ community.
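The classification accuracy statistics named in Aim 3 reduce to simple proportions once you dichotomize scores at a cut. A minimal sketch, with made-up scores and diagnostic statuses rather than anything from the study:

```python
import numpy as np

def classification_accuracy(scores, has_pathology, cut):
    """Sensitivity, specificity, and hit rate for flagging scores >= cut."""
    scores = np.asarray(scores)
    truth = np.asarray(has_pathology, dtype=bool)
    flagged = scores >= cut
    sensitivity = flagged[truth].mean()       # true positives / all positives
    specificity = (~flagged)[~truth].mean()   # true negatives / all negatives
    hit_rate = (flagged == truth).mean()      # overall proportion correct
    return sensitivity, specificity, hit_rate

# Tiny illustrative example: T-scores and hypothetical diagnostic status
scores = [40, 55, 62, 70, 48, 66, 58, 72]
status = [0, 0, 0, 1, 0, 1, 1, 1]
sens, spec, hit = classification_accuracy(scores, status, cut=60)
print(sens, spec, hit)  # 0.75 0.75 0.75
```

Comparing these statistics across candidate cut-scores is how one would test whether an alternative cut outperforms the standard one for a given group.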
I’m looking forward to sharing more of Nicole’s work on EAT as we move forward.
I’m thrilled to have a paper with Anthony Isacco and TTU’s newest faculty member Nick Borgogna accepted for publication in Psychological Assessment. We used a sample of Catholic clergy who underwent a psychological evaluation to predict whether those individuals were ultimately admitted by the Catholic Church to the formation training program, and whether they completed that training (typically a 5+ year process). We computed a series of relative risk estimates for each MMPI-2-RF scale at multiple cut scores for each outcome (admission, completion). We also compared mean scores between the groups to summarize overall patterns of group difference. This prospective evaluation offers support for the MMPI-2-RF in aiding these decisions, as elevations on numerous scales were associated with increased risk of less desirable outcomes (e.g., non-admission or non-completion). Lower thresholds for clinical scales are recommended.
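For anyone unfamiliar with relative risk in this context: at a given cut score, it is just the rate of the outcome among those at or above the cut divided by the rate among those below it. A minimal sketch with invented numbers (not the study’s data):

```python
import numpy as np

def relative_risk(scores, bad_outcome, cut):
    """Risk of the outcome when score >= cut, relative to score < cut."""
    scores = np.asarray(scores)
    outcome = np.asarray(bad_outcome, dtype=bool)
    elevated = scores >= cut
    risk_elevated = outcome[elevated].mean()   # outcome rate above the cut
    risk_below = outcome[~elevated].mean()     # outcome rate below the cut
    return risk_elevated / risk_below

# Illustrative example: scale T-scores and non-admission (1 = not admitted)
scores = [45, 50, 68, 72, 55, 66, 48, 70]
not_admitted = [0, 1, 1, 1, 0, 0, 0, 1]
rr = relative_risk(scores, not_admitted, cut=65)
print(rr)  # 3.0 -> those above the cut were 3x as likely to be non-admitted
```

Repeating this for each scale, at several candidate cuts, for each outcome yields the kind of table of relative risk estimates the paper reports.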
I have created an APA-style pre-print for the paper that may be downloaded here. This paper is not the copy of record and may not exactly replicate the final, authoritative version of the article. Please do not copy or cite without authors’ permission. The final article will be available, upon publication, via its DOI: 10.1037/pas0001028.
© 2021, American Psychological Association.
Select results are presented and summarized below.
The USCCB guidelines (2015) for psychological evaluations, which call for identifying problems with “affective maturity,” point our current research toward developing a clear definition of what that might look like on the MMPI-2-RF when contrasting group means. Specifically, Emotional/Internalizing Dysfunction (EID) and Dysfunctional Negative Emotions (RC7) require special consideration, and interpreting these scales will likely require adapting traditional cut-scores to lowered thresholds. Our study provides support for the use of these lower cut-scores with clergy applicants, consistent with existing literature on other public service personnel. While prediction of proximal outcomes (admission) was stronger than that of distal outcomes (formation completion), scales consistently demonstrated increased risk as scores elevated, particularly the internalizing pathology scales (e.g., EID, RC7, Self-Doubt [SFD]) and those measuring Somatic/Cognitive Concerns. A notable exception was RC2, which did not offer the same predictive capacity. Clinicians conducting these evaluations may rely on the MMPI-2-RF for its prospective predictive utility with clergy. Given the improved psychometrics of this test, and of the MMPI-3, over and beyond those of the MMPI-2, we recommend transitioning to modern testing instruments for clergy evaluations.
I’m thrilled to have gotten the e-mail from Nicole Morris this morning letting me know that our paper was accepted at Addictive Behaviors. The study uses a large, multi-site sample of individuals receiving residential substance use treatment and asks whether the CES-D (a common depression measure) is useful for predicting treatment outcomes. The results are a resounding success!
Download the paper HERE
Some brief results are presented below.
Applying to graduate school can be super confusing and stressful. I wanted to try to give some insight into what you can do along the way to maximize your potential for admission into a psychology doctoral program, and into how I (and many others) view applications and the process in general. There is a lot more that I could say, but I wanted to give some sense of my perspective on applicants and their “fit” with me as a mentor.
A few take-aways
Although the PAI is trained on and used similarly to the MMPI (Wright et al., 2017; Ingram et al., 2020), its validity scales have not received the same level of study and do not show the same rates of detection for invalid responding within military populations (see Brittney’s thesis as an example). In fact, even when new validity scales are developed, they are rarely assessed beyond their initial validation.
Gaasedelen et al. (2019) developed the Cognitive Bias Scale (CBS) as a comparative measure to the MMPI-2-RF’s highly effective Response Bias Scale (RBS). The RBS employed criterion coding using PVT failure to identify potentially useful items, and as a result of these methods some consider it the most reliable MMPI-2-RF validity scale given its blended SVT/PVT approach (Ingram & Ternes, 2016). The new CBS used the same methods to develop a scale assessing feigned cognitive symptoms on the PAI, a symptom domain that was previously overlooked. In our most recent study, we replicated their validation in an active duty military sample to see whether their suggested cut scores and observations of effectiveness were generalizable.
What did we find?
Check out the table below for classification accuracy information (red bolded text indicates the recommended values for each comparison [specificity > .90, sensitivity ≈ .30]):
Brittney did absolutely awesome yesterday and defended her thesis. There is too much to say about this project, and I’m thrilled about how it went and how it turned out. We’re starting to transform the project into papers next, and we anticipate that the papers Brittney writes will be very useful additions to the literature surrounding differential diagnosis in active duty personnel, greatly expanding the available literature on these two instruments.
If you want to catch a glimpse of how she did, here is her presentation (about 27 minutes and includes only her presentation/summary of the findings): Check it out!
About a year and a half ago I launched (and subsequently published) a pilot project with Drs. Adam Schmidt and Matt Cribbet to look at trends in training for psychological assessment by asking trainees themselves about their experiences. This week, the first paper from the large-scale study that followed up on that pilot, based on the feedback we received, was published in Training and Education in Professional Psychology (TEPP), this time using a national sample of 534 doctoral trainees in health service psychology programs. GET THE PDF HERE
In short, our results suggest that (i) patterns of training coverage mirror the instrument use patterns of psychologists currently in clinical practice, (ii) students receive more frequent didactic and classroom exposure during training than practice opportunities with clients, and (iii) program types [PhD, PsyD] and program areas [Clinical, Counseling] are generally similar in their coverage. Let’s break down what this means to me, as a researcher and professional working to ensure appropriate and effective psychological assessments.
There was a lot more to unpack from the article, but those were a few of my reflections. I’m looking forward to where we go next as a research team on this, and am excited to see the work related to performance-based competency and factors impacting use and intention to work with psychological assessment.
Nicole recently presented research summarizing a paper we have out for review examining the utility of the MMPI-2-RF and MMPI-3 to predict treatment use and treatment-related attitudes in a short-term longitudinal sample of individuals with moderate to severe depressive symptoms. This was presented at the 2020 MMPI symposium and the PowerPoint may be FOUND HERE [CLICK]
Abstract
Literature surrounding the MMPI-2-RF has started to demonstrate convergence about which scales best predict treatment engagement and outcomes; however, it is also limited in several ways. For instance, externalizing scales frequently emerge as indicators of treatment dropout (Anestis et al., 2015; Mattson et al., 2012; Tylicki et al., 2019); however, this is not always true (Arbisi et al., 2013; Tarescavage et al., 2015). Moreover, studies have frequently focused on outcomes using clients who have already initiated treatment across different settings, while only one has examined the capacity of the MMPI-2-RF scales to predict treatment initiation (Arbisi et al., 2013). As such, there is notable variability in scale-specific findings as well as in the types of behaviors that are predicted by the MMPI-2-RF. In addition, the soon-to-be-released MMPI-3 underscores the necessity of establishing a similar research base for this new instrument. While many of its scales are based on existing MMPI-2-RF scales, the new and revised scales have yet to undergo extensive validation. This is the first such study to examine treatment use and engagement among those who would likely benefit from those services but who were not recruited from treatment locations (i.e., some are in therapy and some are not).
Big Takeaways
An overview of general scale-related findings is provided below:
MMPI-2-RF
MMPI-3