Some colleagues and I here at Texas Tech are taking a look at training trends in assessment (click here for the presentation PPT) and the implications those trends have for the validity of instrument use. After all, if scales are valid indicators of their intended constructs but are not used appropriately or consistently (especially during graduate school, when supervision and feedback are at their highest and the material is most fresh), then there should be concern about how this translates into practice after graduation. The poster being presented in two weeks looks at personality instruments specifically, and there are a few things to note:
- Clinical and counseling psychology don't differ on a majority of training variables, whether perceived or actual (i.e., skill-based assessment outcomes). Partial results are reported in the poster. Assessment is a vital domain of psychology, and this supports my position that counseling psychology is just as ready to do it as any other area of applied psychology.
- Frequency of use for personality instruments in training settings mirrors that observed in instrument sales, suggesting practicum is an important part of assessment training. Courses, however, do not reflect this same pattern and are unlikely to prepare people for the instruments they will actually encounter (e.g., MMPI-2 versus MMPI-2-RF rates).
- Skills in interpreting the MMPI-2 carry over fairly well to the MMPI-2-RF, with no significant differences in narrative profile interpretation or symptom frequency estimation (the latter unreported here).
- Most individuals completed the entire study, but a sizable portion quit the moment they were asked to interpret personality measures. This is not data missing completely at random; it suggests a distinctive reaction by graduate students when they see personality assessment tasks. I suspect it's tied to competency and self-perceptions of under-preparedness. Conclusions about how stable objective personality measure interpretations are remain greatly limited by this sizable, self-selected dropout.
- There is substantial variation in training patterns with regard to what is reported for objective personality measures. As is happening in neuropsychological assessment, a more standardized practice of reporting results would improve personality assessment.
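To make the missing-data point above concrete: if dropout were missing completely at random (MCAR), completers and dropouts should not differ on any observed variable, so a simple group comparison on a baseline measure serves as a check. The sketch below uses entirely simulated data; the "competence" variable is a hypothetical self-rated assessment-competence score, not anything from the study.

```python
# Sketch: checking whether study dropout is consistent with MCAR by
# testing whether an observed baseline variable predicts dropping out.
# All data here are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
competence = rng.integers(1, 6, size=n)     # observed for everyone (1-5)

# Simulate dropout that depends on competence (lower score -> more dropout)
p_drop = 0.6 - 0.1 * competence
dropped = rng.random(n) < p_drop

# Under MCAR, the two groups would not differ on any observed variable;
# a t-test on the baseline score is one simple check.
t, p = stats.ttest_ind(competence[dropped], competence[~dropped])
print(f"t = {t:.2f}, p = {p:.4f}")  # a small p suggests data are not MCAR
```

A significant difference on an observed variable rules out MCAR, which is exactly the pattern described above: participants who quit at the interpretation task are systematically different from those who finished.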
Disclaimer: The data here represent a preliminary analysis of the project and cover only selected thematic results, although those results are consistent with the full sample. Full results are being prepared for publication and will follow in a subsequent post.
Personal impression from the results: As a field, we cannot focus exclusively on scale validity, treatment efficacy, and the like without also ensuring that training in those topics is sufficient. There needs to be an APA-led initiative on training standards so that we can ensure fidelity and adhere to our role as social scientists.
Below are the presentations the lab is giving this year. If you will be at APA or the MMPI symposium, come see what we're doing in person.
Ingram, P.B., Tarescavage, A.M., Ben-Porath, Y.S., & Oehlert, M.E. (2018, April). The MMPI-2-Restructured Form (MMPI-2-RF) Validity Scales: Patterns Observed across Veteran Affairs Settings. Paper accepted at the 2018 MMPI-2-RF/MMPI-2/MMPI-A-RF/MMPI-A Conference.
Ingram, P.B., Cribbet, M., & Schmidt, A.T. (2018, April). Training Trends in the Interpretation of the MMPI-2/MMPI-2-RF/MMPI-A/and MMPI-A-RF. Poster accepted at the 2018 MMPI-2-RF/MMPI-2/MMPI-A-RF/MMPI-A Conference.
Ingram, P.B., Parkman, T.J., Staggs, B., & Ternes, M.S. (2018, August). The MMPI-2-RF over-reporting scales: A meta-analysis of criterion-grouped clinical samples. Poster accepted at the 2018 American Psychological Association Convention.
MMPI-3: Revision of MMPI-2 or Marketing Hype?
The MMPI-3 is the next step because the dated psychometric standards underlying the MMPI/MMPI-2 can't continue to guide our interpretation of personality constructs. They simply don't align with modern psychometric practice or with contemporary understanding of what personality is and how it is structured. The concerns outlined in this article do a poor job of representing the literature, and I would have expected more from a commentary on the future of the most popular personality instrument. The author doesn't even cite the rebuttals to the criticisms of the RF, leaving the article poorly balanced. The RF isn't perfect, and there are numerous ways in which it needs to be refined (e.g., validity scale moderation and detection rates per Ingram & Ternes, 2016). There are also many important clinical constructs it doesn't measure (borderline personality, etc.), but these gaps are true of the MMPI-2 as well, on top of its other issues (subtle items add noise rather than precision, poor homogeneity of scale constructs, etc.). Of course financial and market-based decisions influence these kinds of advancements, but from a practical standpoint, the RF outperforms the MMPI-2.
The truth is that the MMPI-3 is coming, and it needs to. There are problems with the item set the RF is constructed from; it needs to broaden, and that means moving farther from the context of the MMPI-2. This move will create an opportunity to correct some of the issues inherent to the current scales and should allow better alignment with psychometric and psychopathology theory. Transitions aren't without problems and advancement isn't always universal, but it is impossible to ignore that the age of the MMPI-2 is past.
Just a quick update on some goings-on in the lab since the summer. This is a snapshot of projects in the works at different stages.
- Another meta-analytic evaluation of the MMPI-2-RF has completed data collection, and analysis begins this week.
- IRB submission is beginning within a neuropsych clinic for active duty military. This project will examine the discriminant capacity of MMPI-2-RF validity scales. There is currently no timeline for this project.
- IRB submission is being finalized for a Veteran sample within an outpatient posttraumatic stress disorder clinic to examine the role of clinical personality measures in the classification of PTSD. This clinic utilizes the PAI and MCMI. Analysis is anticipated to begin within 1-2 months.
- Analysis of the nationally based Veteran sample is ongoing, and a first manuscript from this database is being prepared. Some preliminary data from this project were presented at the 2017 MMPI-2-RF/MMPI-2/MMPI-A-RF/MMPI-A conference.
- IRB submission is being prepared for a malingering study of Veterans with comorbid PTSD and mTBI. Data collection is projected to begin in Spring 2018.
Treatment Seeking and Initiation/Engagement
- IRB has approved a data collection project on mental health stigma and how it relates to health factors. Data collection is beginning now in conjunction with a second university.
- A manuscript examining the role of masculine identity in treatment seeking is being prepared and is anticipated for submission within 1-2 months. It builds on the poster I presented with Dr. Brian Cole at the 2017 APA Convention.
This was a great conference, and a lot of ideas are still stirring in my head from the conversations I had over the last several days. I tried to share some thoughts on Twitter (@IngramPsychLab) as I went, but I think the biggest takeaway is the importance of moving stigma research beyond correlations with attitudes and intentions to seek treatment.
Promoting engagement in psychotherapy needs to expand to behavioral correlates for us to understand the role stigma plays. It also became clear to me that stigma should be framed as a health disparity barrier, and that research on behaviors associated with self-stigma should expand to encompass health psychology and the preventable impacts stigma has there. This is a very real, very evident, and very serious consequence of stigma, and the more we bring this component to the forefront of stigma research, the more attention I hope we can generate on changing the culture around mental illness. Lots of exciting conversations about next steps happened at the conference, so stay tuned!
The 2017 APA Convention is less than a month away, and I'm getting excited for the annual trip, this time to Washington, D.C. Want to talk? Come see the posters Dr. Ingram is part of at the conference center! Poster locations are in parentheses.
11:00-11:50 Self-stigma, gender role conflict, and help seeking (F-24)
12:00-12:50 The Role of Hope and Stigma in Treatment Seeking (E-14)
12:00-12:50 Structural evaluation of RIASEC assessment methods (E-21)
9:00-9:50 Readiness to Change As a Treatment for Stigma (A-27)
I'm thrilled that my meta-analysis of the MMPI-2-Restructured Form (MMPI-2-RF) was recognized as runner-up for best manuscript of the year by the American Academy of Clinical Neuropsychology (AACN) in its 2016 student research competition. The article was published last year in the academy's flagship journal.
Ingram, P.B., & Ternes, M. (2016). The detection of content-based invalid responding: A meta-analysis of the MMPI-2-Restructured Form's (MMPI-2-RF) over-reporting scales. The Clinical Neuropsychologist, 30, 473-496. doi: 10.1080/13854046.2016.1187769
This paper was not only the first meta-analysis of the MMPI-2-RF validity scales but also examined several important moderators differentially impacting the detection of feigning across the over-reporting scales. The findings highlight the importance of considering context, client, and evaluation-specific information when making clinical interpretations. There are lots of exciting ways I will build on this moving forward, including examining interactions between moderators and further study of specific clinical presentations such as PTSD and TBI.
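For readers unfamiliar with moderator analyses in meta-analysis, the sketch below shows the basic mechanics of a fixed-effect meta-regression: study effect sizes are regressed on a binary study-level moderator using inverse-variance weights. All effect sizes, sampling variances, and the moderator coding here are invented for illustration and are not values from the paper.

```python
# Sketch of a fixed-effect meta-regression with one binary moderator.
# All numbers below are invented for illustration.
import numpy as np
from scipy import stats

d = np.array([1.2, 0.9, 1.5, 0.6, 0.8, 1.4])        # study effect sizes
v = np.array([0.05, 0.08, 0.04, 0.10, 0.07, 0.05])  # sampling variances
context = np.array([1, 0, 1, 0, 0, 1])              # hypothetical moderator (e.g., setting)

w = 1 / v                                           # inverse-variance weights
X = np.column_stack([np.ones_like(d), context])     # intercept + moderator

# Weighted least squares: beta = (X'WX)^-1 X'Wd
XtWX = X.T @ (w[:, None] * X)
beta = np.linalg.solve(XtWX, X.T @ (w * d))
se = np.sqrt(np.diag(np.linalg.inv(XtWX)))          # fixed-effect standard errors

z = beta[1] / se[1]                                 # test the moderator slope
p = 2 * stats.norm.sf(abs(z))
print(f"moderator slope = {beta[1]:.2f}, z = {z:.2f}, p = {p:.4f}")
```

A significant slope means the pooled effect genuinely differs across levels of the moderator, which is the sense in which context can "differentially impact" detection rates across scales.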
Do you want to know more or are you interested in collaborating? Send me an E-mail.