Nicole recently presented research summarizing a paper we have out for review that examines the utility of the MMPI-2-RF and MMPI-3 for predicting treatment use and treatment-related attitudes in a short-term longitudinal sample of individuals with moderate to severe depressive symptoms. This was presented at the 2020 MMPI symposium, and the PowerPoint may be FOUND HERE [CLICK]
Literature surrounding the MMPI-2-RF has started to demonstrate convergence about which scales best predict treatment engagement and outcomes; however, it is also limited in several ways. For instance, externalizing scales frequently emerge as indicators of treatment dropout (Anestis et al., 2015; Mattson et al., 2012; Tylicki et al., 2019); however, this is not always true (Arbisi et al., 2013; Tarescavage et al., 2015). Moreover, studies have frequently focused on outcomes using clients who have already initiated treatment across different settings, while only one has examined the capacity of the MMPI-2-RF scales to predict treatment initiation (Arbisi et al., 2013). As such, there is notable variability in scale-specific findings as well as in the types of behaviors the MMPI-2-RF is used to predict. In addition, the soon-to-be-released MMPI-3 underscores the necessity of establishing a similar research base for this new instrument. While many of its scales are based on existing MMPI-2-RF scales, the new and revised scales have yet to undergo extensive validation. This is the first such study to examine treatment use and engagement among those who would likely benefit from those services but who are not recruited from treatment locations (e.g., some are in therapy and some are not).
An overview of general scale-related findings is summarized below:
- The MMPI-3 has stronger and broader relationships with criterion measures, suggesting probable improvement over the MMPI-2-RF scales.
- Measures within the internalizing and interpersonal domains are the strongest and most consistent predictors, and externalizing scales are also useful.
This is a difficult time for our society, as well as for ourselves, our clients, and our communities. This year has posed substantial challenges to those within the United States, as well as those around the world. Most recently, the #BlackLivesMatter movement laid bare once again the difficulties that many individuals in our department, community, and country continue to face. In times such as these, it is important to me that I express my unwavering support for this movement, and for the causes it represents. The tragic deaths and the resulting protests are a direct result of the historic and systemic unequal treatment of racial and ethnic minorities. George Floyd, Ahmaud Arbery, Breonna Taylor, Eric Garner, and Philando Castile represent only a fraction of the violence against minorities in the United States now, as well as over our 400-year history. Black individuals and communities, as well as other minority groups, deserve equality. They deserve safety and freedom from oppression. A core mission of counseling psychology is advocacy for mental health and human welfare. Thus, it is important that we, united as a lab and academic community, stand firmly against racism, discrimination, and inequality. The PATS lab stands in solidarity with #BlackLivesMatter, and with all others who advocate for equality and justice. As a pillar of counseling psychology, we support and advocate for social justice and social change. I am proud to do so. We will continue to do so.
While these times are challenging, I am also encouraged by the recent ruling from the Supreme Court of the United States extending Title VII of the Civil Rights Act of 1964 to gender identity and sexual orientation. This ruling is a landmark decision which clarifies what we already knew as a profession – that all individuals are worthy of love, deserving of respect, and of equal value. Unfortunately, this monumental victory also comes in the wake of health care protections being rolled back for the LGBTQ+ community. This is a heartbreaking outcome for our lab, considering we work directly to improve, implement, and disseminate research that supports clinicians providing empirically supported and diversity-informed treatment. To this end, while also mourning the ongoing tragedies of racial and ethnic injustice, we celebrate Pride Month and the progress made during it. Positive steps in social change underscore our belief that change is happening, and that progress is possible. This change, and the beliefs espoused here, are central to our lab identity, culture, and mission.
Recently accepted to Spirituality in Clinical Practice. Pre-print coming soon.
Effective screening and psychological assessment is critical for identifying and screening out clergy candidates who may be unsuitable for these callings, based on psychopathology, addictive behavior, emotional immaturity, personality characteristics incongruent with effective ministry, or deviant sexual interests and behaviors. Emotional deficits such as avoidance of negative emotions and inability to cope with negative affect are known risk factors for the problematic behaviors (e.g., sexual offending) that psychological assessments of clergy applicants are intended to identify. Likewise, a preponderance of negative emotions (i.e., anger, impatience, irritability, and resentment) is regularly found in clergy credibly accused of sexual misconduct.
In clergy applicants (n = 137) undergoing psychological assessments as part of their evaluation for a diaconate formation program between 2013 and 2019, correlational relationships between 16PF and MMPI-2-RF scale scores were examined. Partial results are presented below.
Significant relationships were most pronounced between 16PF global factors of Anxiety and Self-Control and the MMPI-2-RF scales of emotional (e.g., inefficacy, self-doubt, anxiety) and interpersonal dysfunction (e.g., social avoidance, family problems), consistent with an emotional deficits hypothesis. Relationships with behavioral dysconstraint scales were less pronounced and irregular, perhaps owing to the low rate of clinically elevated scores and the restricted range of scores due to defensive responding, as evidenced by high K-r/L-r scores. Thus, rather than focusing on egregious problematic behavior that is likely minimized or hidden in an admission context, psychological evaluations may be more effective by focusing on problematic emotional states as well as candidates’ ability to manage stress and cope with challenges (e.g., Baer & Miller, 2002). Based on results observed within this study, the MMPI-2-RF appears well-suited to this task.
Last year Brian Cole (Assistant Professor of Counseling Psychology and Director of Training at the University of Kansas) and I published an article examining the ways that stigma and masculinity predict different types of treatment-seeking behaviors. KU had a news release today on his work on men’s depression, which includes our PAPER [click here for a pre-print version].
It’s always cool to see work with colleagues get spotlighted. I can’t wait to do more with him in the next project on depression help seeking.
The first (but not last) paper with Joe Currin was accepted yesterday in Counseling Psychology Quarterly. In this paper, we examine early career (ECP; those graduating within the last 10 years) Counseling Psychologists currently employed in Tenure Track (TT) positions at the 69 doctoral training programs for Counseling Psychology (APA, 2018) across the country. Given the centrality of research during the tenure process and the intentional vagueness of tenure guidelines (a positive for review committees to allow for latitude, and a negative for TT faculty seeking assurance), we wanted to know what trends in research have been common over the last decade, what is typical for faculty, and whether R1/R2 Carnegie institution distinctions made a difference (given the different expectations, support, etc.).
READ THE PRE-PRINT OF THE PAPER HERE
A few things stood out to us based on the sample (publications were collected by year from Web of Science for all ECP TT faculty with a searchable account).
- ECP faculty published an average of 9.4 articles over their first six years (SD = 12.5). R1 faculty published more (M = 12.2, SD = 14.7) than R2 faculty (M = 8.6, SD = 8.6), t(158) = 3.26, p < .01, d = .53.
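For readers curious about the mechanics behind this comparison, the reported statistic has the form of a standard independent-samples t-test paired with a pooled-standard-deviation Cohen's d. Below is a minimal sketch using made-up publication counts (the numbers and group sizes are illustrative only, not the study data):

```python
import math
from statistics import mean, stdev

def students_t_and_d(x, y):
    """Student's t (pooled variance) and Cohen's d for two independent groups."""
    nx, ny = len(x), len(y)
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2) / (nx + ny - 2))
    t = (mean(x) - mean(y)) / (sp * math.sqrt(1 / nx + 1 / ny))
    d = (mean(x) - mean(y)) / sp  # standardized mean difference
    return t, d, nx + ny - 2  # t, effect size, degrees of freedom

# Hypothetical six-year publication counts for two groups of faculty
r1 = [15, 8, 20, 3, 12, 18, 6, 14]
r2 = [7, 5, 10, 2, 9, 6, 4, 8]
t, d, df = students_t_and_d(r1, r2)
print(f"t({df}) = {t:.2f}, d = {d:.2f}")
```

A d of about .5, as reported above, is conventionally read as a medium-sized effect.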
- Google Scholar metrics for the subsample of those with GScholar accounts (n = 92; 57.5% of sample) provide the following descriptive information:
- h-index: M = 10.3, SD = 7.8
- R1 h-index: M = 12.1, SD = 8.4
- R2 h-index: M = 7.1, SD = 5.0
- i10-index: M = 12.7, SD = 17.1
- R1 i10-index: M = 15.9, SD = 20.0
- R2 i10-index: M = 6.9, SD = 6.9
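Since the h-index and i10-index anchor the numbers above, here is a quick sketch of how Google Scholar defines them (the citation counts are invented for illustration):

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank  # this paper still clears the bar at its rank
        else:
            break
    return h

def i10_index(citations):
    """Number of papers with at least 10 citations."""
    return sum(c >= 10 for c in citations)

# Hypothetical citation counts for one faculty member's papers
cites = [24, 18, 12, 9, 7, 6, 5, 3, 1, 0]
print(h_index(cites), i10_index(cites))  # → 6 3
```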
- Year-by-year average of publications for ECP faculty (analyses excluded the year that faculty started their position, as work published then reflects work conducted elsewhere and may well have been accepted prior to starting the position)
- Year 1 M = 2.6, SD = 2.5
- Year 2 M = 3.5, SD = 2.8
- Year 3 M = 3.3, SD = 2.9
- Year 4 M = 3.3, SD = 3.5
- Year 5 M = 4.4, SD = 4.6
- Year 6 M = 5.1, SD = 6.7
- Year 1 M = 1.4, SD = 1.6
- Year 2 M = 1.7, SD = 1.5
- Year 3 M = 2.2, SD = 1.7
- Year 4 M = 1.9, SD = 2.0
- Year 5 M = 3.8, SD = 3.1
- Year 6 M = 3.7, SD = 4.3
- Year-by-year average of 1st-authored publications
- Year 1 M = 1.0, SD = 1.4
- Year 2 M = 1.8, SD = 1.6
- Year 3 M = 1.6, SD = 1.4
- Year 4 M = 1.5, SD = 1.3
- Year 5 M = 2.1, SD = 2.1
- Year 6 M = 1.6, SD = 1.6
- Year 1 M = 0.6, SD = 0.9
- Year 2 M = 0.9, SD = 1.1
- Year 3 M = 1.0, SD = 1.0
- Year 4 M = 0.9, SD = 1.4
- Year 5 M = 1.8, SD = 1.4
- Year 6 M = 1.9, SD = 1.5
Our hope within this project was to provide some sense of what is normal within the most recent cohort of early career faculty who have entered (and remained in) TT faculty positions. Publication rates of those currently pre-tenure are slightly higher than those of faculty who have already obtained tenure, likely reflecting the increased pressure to ‘publish or perish’; however, these differences amounted to substantially less than a full publication, indicating that publication rates have increased relative to 10 years ago but that the differences are relatively modest (~.5 publications or fewer per year when contrasting tenured with pre-tenure faculty during their first 3 years in a faculty position, at both R1 and R2 institutions).
The lab didn’t win the TTU Psych Department’s lab logo contest, but we made a strong showing, coming up just short of our departmental collaborator’s lab, Dr. Schmidt’s. There were so many super cool logos from all the labs.
Dr. Schmidt’s lab design was made by our lab’s very own Liz Morger [CLICK ME], and this is a good chance to plug some of her work. She will be presenting at the Association for Psychological Science this year with some work based on a collaboration between our labs on the MMPI-A-RF in adolescents assessed at a local youth detention center as part of a larger study on trauma.
Title: Childhood Adversity and Externalizing Behaviors Among Justice-Involved Youth
Abstract: The current study examined relationships between the total score of the ACE Questionnaire and four subscales of the MMPI-A-RF that measure externalizing dysfunction. Independent t-tests and correlations indicate that justice-involved youth with higher ACE scores exhibit significantly more behavioral dysfunction and substance abuse.
I just got out of a meeting with Dr. Schmidt yesterday talking about one of our papers on training outcomes, and it occurred to me that a summary of those projects is long overdue here. So here we go – how PATS is working to help train better clinicians and inform the field about outcome standards:
- We have a paper in review looking at the pedagogical practices of assessment training based on a review of syllabi from instructors all around the country. There are conceptual frameworks for what should happen (teaching knowledge, skills, and attitudes; Kaslow, 2018), but we wanted to know whether this is happening and how students achieve mastery of those domains.
- The national survey of graduate student trainees in HSP finished collection last summer, and we are in the midst of a few papers from that. We have one out in review with a state-of-the-field summary of training experiences in psychological assessment. We have another being presented in a few months at the Society for Personality Assessment about what factors (conceptually grouped as trainee characteristics, clinical experiences, or program traits) predict the intention to utilize psychological assessments in clinical practice. After those two we will be focusing on the third, and last, part of that project – performance-based benchmarks.
- After a survey of training directors at APPIC internship sites, we are finalizing two manuscripts giving a (much needed) update on what internship sites find important. The first evaluates the different elements of the APPIC application on a numeric system and compares, across different site types, what matters for getting an interview and then what matters for ranking order once applicants interview. The second paper is a qualitative analysis of what ‘fit’ is, so that trainees can understand what they actually need to describe when writing letters and personal statements.
It should be a big year for training.
I’m excited to have another paper in press, this time at the Journal of Personality Assessment. This paper utilizes the same national sample I’ve previously published on to examine service era differences, provide comparison groups for specific treatment clinics, and examine trends in validity scale performance. In my new paper, we provide correlations between the MMPI-2-RF and a variety of commonly utilized self-report measures of symptom severity within the VA (e.g., those for anxiety, depression, and PTSD). This, in conjunction with these other VA papers, offers an interpretive framework for clinicians to use as they interpret assessment profiles for Veterans receiving care.
You can download the pre-print of the accepted paper HERE.
Here are a few of the key takeaways that stand out to me:
- Of those given the PCL in the sample (a PTSD screener for DSM-5), the average score was 50 (the recommended cut score for a positive PTSD screen is 33).
- Across the board, DSM-IV screeners for PTSD have stronger/more reliable relationships with content associated with a PTSD diagnosis on the MMPI-2-RF than the DSM-5 PTSD screener. The reason for this is unclear.
- The BDI-II (a measure of depression that makes up part of a major suicide screener) is not meaningfully related to any of the indicators you would expect (e.g., hopelessness, suicidality, depression, anhedonia) on the MMPI-2-RF. That’s really surprising and alarming given how it gets used in the VA.
- Adding to the last point, elevations are fairly high across a lot of measures (both screeners and MMPI-2-RF scales). This is consistent with the high rate of disorders in the VA and suggests that those who get assessed are likely to experience a variety of mental health problems.
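The relationships summarized here are (presumably) standard Pearson product-moment correlations between screener totals and MMPI-2-RF scale scores. A minimal sketch with invented scores — the variable names `pcl` and `rc7` are illustrative only, not the study's data or scales:

```python
import math
from statistics import mean

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length lists."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical PCL-5 totals paired with hypothetical MMPI-2-RF T-scores
pcl = [50, 62, 33, 45, 70, 38]
rc7 = [65, 72, 50, 58, 80, 55]
print(round(pearson_r(pcl, rc7), 2))  # strong positive association in this toy data
```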
Here are those relationships for the internalizing scales of the MMPI-2-RF
This year the lab had two papers accepted with Brittney on them. The most recent was accepted into the Journal of Clinical and Experimental Neuropsychology and covered her AACN presentation from this summer on the use of the MMPI-2-RF to detect invalid responding in a sample of active duty Army personnel. Download the paper HERE.
Recap of the presentation on her most recent publication:
And here is the article from earlier this year on PAI response profiles using a sample collected at a VA outpatient PTSD clinic:
Ingram, Sharpnack, Mosier, & Golden (in press). Evaluating symptom endorsement typologies of trauma-exposed veterans on the Personality Assessment Inventory (PAI): A latent profile analysis.
Note. This post has been updated to include a link to the paper on the MMPI-2-RF validity scales with the military sample (update 1/4/2020).
Another great door-decorating year for the lab. We had a lab meeting yesterday and all the undergrads brought their greatest ideas – we settled on the evil laboratory door and it turned out great. They were working on this until 6:30 last night and I love it.
Admittedly, the whole department has some awesome doors and I’m digging the bracket challenge we’re doing.
Now the next question, how long can you leave Halloween decorations up for?
Question after that, when is too early to start planning for next year?