Assessment training

Following the lab’s pilot research (see Ingram, Cribbet, and Schmidt, 2018) on assessment training and educational outcomes in health service psychology, Dr. Schmidt and I have submitted a grant to evaluate these trends nationally across all APA-accredited programs. We hope our work will inform health service psychology training and promote a more empirically based approach to psychological assessment. We’re very excited about what this project can contribute to graduate education and training in our field, so keep your fingers crossed! It’s time to make sure our profession gets the training support and emphasis needed to be successful.

#BetterScience

#BetterPsychology

New Lab Logo and Quick Update

Paul Lab Logo

Thanks to the awesome Dr. Joe Currin for making our amazing lab logo!

There is a lot going on in the lab for both the assessment research and the treatment-seeking work. I have several papers out right now on the MMPI-2-RF, much of it based on the papers we have presented in recent years at the MMPI symposium (such as this, this, and this). I’m also working with Brittney and Nikki on another paper, evaluating PTSD assessment tools, that we plan to submit shortly. Likewise, several undergraduates are joining the lab to help us collect data, and one has even started to put together an exciting research proposal based on available data (more to come soon on that!). Although the grant to evaluate PTSD in an incarcerated minority youth sample wasn’t funded this summer, Dr. Adam Schmidt and I received a very favorable review and an invited re-submission for next year, and we are looking forward to that in a few months.

As for the stigma and treatment-seeking research, data collection for the gender roles and Latinx project is ongoing. Recruitment picked up this year, and we are in talks about poster submissions using that data at APA. Data collection on stigma and health behaviors in response to physical pain ended last spring, and Dr. Shinye Kim and I are in the midst of analysis. More to come!

Fall Semester is almost here!

I can’t believe I’m starting year two as an assistant professor already! This is going to be a busy year for sure, and I’m excited to have my new advisee Brittney Golden joining the lab. I just got out of several meetings today, and there are some exciting research projects getting underway this fall. I also anticipate taking a student next year (2019-2020 matriculation) and am looking forward to hearing from potential applicants.

Just around the corner this fall is the Combat PTSD conference in San Antonio and my colleagues and I have two poster presentations going.

  • Ingram, P.B., Sharpnack, J.D., & Mosier, N.J. (2018, October). The utility of the Personality Assessment Inventory in assessing PTSD for a group of treatment-seeking combat veterans. Poster accepted for presentation at the 2018 Combat PTSD Conference. San Antonio, Texas.
  • Mosier, N.J., Sharpnack, J.D., & Ingram, P.B. (2018, October). Evidence-based practice begins with Evidence-based Assessment: Towards a model of multidimensional evaluation of PTSD symptoms and treatment outcomes. Poster accepted for presentation at the 2018 Combat PTSD Conference. San Antonio, Texas.

After that it’s on to preparing submissions for APA. The lab is trying hard to collect enough data for our comorbidity feigning study to allow us to present it at the conference. We’ve got some other fun things we’re going to try to send out over the next few months (papers and presentations).

MMPI Research Update today

Today was a busy day for MMPI Research with a lot of exciting progress.

The validity scale presentation from the 7-year MMPI-2-RF sample, drawn from VAs around the country, is finished and will be presented in two weeks at the 2018 MMPI symposium. The presentation examines validity scale response styles within the VA; here are some of its big conclusions.

  • MMPI-2-RF profiles are frequently invalid within the VA
  • Validity scale performance varies by outpatient service location
  • Over-reporting (OR) scales are the most frequently invalidated scales
  • Those exceeding interpretive guidelines on one OR scale are likely to exceed others

The other update? I submitted a grant to the University of Minnesota Press Test Division to evaluate MMPI-A-RF responses within a justice-involved sample here in Texas. This project is in collaboration with Dr. Adam Schmidt in the Clinical program here at TTU and has three aims: (1) expand the information available about adolescent forensic evaluations, (2) increase sampling of diverse and minority-status youth, and (3) explore the relationship of the MMPI-A-RF scales to PTSD within a trauma-exposed sample.

Personality Assessment Training

Some colleagues and I here at Texas Tech are taking a look at training trends in assessment (click me for the presentation PPT) and the implications those trends have for the validity of instrument use. After all, if scales are valid indicators of their intended constructs but are not used appropriately or consistently (especially during graduate school, when supervision and feedback are at their highest and the material is most fresh), then we should be concerned about how this translates into practice after graduation. The poster being presented in two weeks looks at personality instruments specifically, and there are a few things to note:

Personality Assessment Interpretation Poster 2018

  1. Clinical and Counseling Psychology don’t differ on a majority of training variables (either perceived or actual, skill-based assessment outcomes). Partial results are reported in the poster. Assessment is a vital domain of psychology, and this supports my position that counseling psychology is just as ready to do it as anyone else in applied psychology.
  2. Frequency of use for personality instruments in training settings mirrors that observed in instrument sales, suggesting practicum is an important part of training in assessment. In contrast, courses do not reflect this same pattern and are not likely to prepare people for the instruments they are likely to encounter (e.g., MMPI-2 versus MMPI-2-RF rates).
  3. Skills in the interpretation of the MMPI-2 hold over fairly well to the MMPI-2-RF, with no significant differences in narrative profile interpretation or symptom frequency estimation (unreported here).
  4. Most individuals completed the entire study, but a sizable portion quit the moment they were asked to interpret personality measures. This is not a case of data missing completely at random; it suggests a distinctive reaction by graduate students when they see personality assessment tasks. I suspect it’s tied to competency and self-perceptions of under-preparedness. Conclusions about how stable objective personality measure interpretations are remain greatly limited by this sizable, self-selected exclusion by participants.
  5. There are substantial variations in training patterns with regard to what is reported for objective personality measures in testing reports. As neuropsychological assessment has done, personality assessment should adopt a more standardized practice of reporting results.
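The missingness point in (4) can be illustrated with a quick, hypothetical sketch: if dropout spikes at the personality-interpretation task, completion status depends on the task itself, which is inconsistent with data missing completely at random. All counts below are invented for illustration and are not study data.

```python
# Hypothetical counts: of participants who reached each stage,
# how many continued vs. dropped out at that stage.
# (Invented numbers for illustration only -- not study data.)
stages = {
    "pre-interpretation items": (180, 5),     # continued, dropped
    "personality interpretation": (140, 40),  # continued, dropped
}

# 2x2 chi-square test of independence: does dropout depend on stage?
(a, b), (c, d) = stages.values()
n = a + b + c + d
chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
print(f"chi2 = {chi2:.2f}")  # values above ~3.84 (df=1, alpha=.05) suggest
                             # dropout is stage-dependent, i.e., not MCAR
```

With these made-up counts the statistic is large, mirroring the pattern described in (4); in practice, Little’s MCAR test or a logistic regression of dropout on observed covariates would be the more standard check.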

Disclaimer: The data here represent a preliminary analysis of the project and cover only selected thematic results. Results here are consistent with those seen in the full sample. Full results are being prepared for a paper and will follow in a subsequent post.

Personal impression from the results: As a field, we cannot focus exclusively on scale validity, treatment efficacy, etc., and assume that training on those topics is sufficient. There needs to be an APA-led initiative on training standards so that we can ensure fidelity and adhere to our role as social scientists.

Upcoming presentations of research

Below are the presentations being presented this year by the lab. If you will be at APA or the MMPI symposium, come see what we’re doing in person.

Ingram, P.B., Tarescavage, A.M., Ben-Porath, Y.S., & Oehlert, M.E. (2018, April). The MMPI-2-Restructured Form (MMPI-2-RF) Validity Scales: Patterns Observed across Veteran Affairs Settings. Paper accepted at the 2018 MMPI-2-RF/MMPI-2/MMPI-A-RF/MMPI-A Conference.

Ingram, P.B., Cribbet, M., & Schmidt, A.T. (2018, April). Training Trends in the Interpretation of the MMPI-2/MMPI-2-RF/MMPI-A/and MMPI-A-RF. Poster accepted at the 2018 MMPI-2-RF/MMPI-2/MMPI-A-RF/MMPI-A Conference.

Ingram, P.B., Parkman, T.J., Staggs, B., & Ternes, M.S. (2018, August). The MMPI-2-RF over-reporting scales: A meta-analysis of criterion-grouped clinical samples. Poster accepted at the 2018 American Psychological Association Convention.

The MMPI-3 is coming (and it needs to)

MMPI-3: Revision of MMPI-2 or Marketing Hype?

The MMPI-3 is the next step because the dated psychometric standards that the MMPI/MMPI-2 are based on can’t continue to guide our interpretation of personality constructs. They simply don’t align with modern psychometric practice or with contemporary understanding of what personality is and how it is structured. The concerns outlined in this article do a poor job of representing the literature, and I would have expected more from a commentary about the future of the most popular personality instrument. The author doesn’t even cite the rebuttals to the criticisms of the RF, leaving the article poorly balanced. The RF isn’t perfect, and there are numerous ways in which it needs to be refined (e.g., validity scale moderation and detection rates per Ingram & Ternes, 2016). There are also many important clinical constructs that aren’t measured (Borderline Personality, etc.), but these gaps are true of the MMPI-2 as well, on top of its other issues (subtle items add noise, not precision; poor homogeneity of scale constructs; etc.). Of course financial and market-based decisions influence these types of advancements, but from a practical standpoint, the RF outperforms the MMPI-2.

The truth is that the MMPI-3 is coming, and it needs to. There are problems with the item set that the RF is constructed from; it needs to broaden, and that means moving further from the context of the MMPI-2. This move will create the opportunity to correct some of the issues inherent in the current scales and should allow better alignment with psychometric and psychopathology theory. Transitions aren’t without problems and advancement isn’t always universal, but it is impossible to ignore that the age of the MMPI-2 is past.

Research Updates

Just a quick update on some goings-on in the lab since the summer. This is just a snapshot of some of the projects in the works at different stages.

Personality Assessment

  • Another meta-analytic evaluation of the MMPI-2-RF has completed data collection, and analysis begins this week.
  • IRB submission is beginning within a neuropsych clinic for active duty military. This project will examine the discriminant capacity of the MMPI-2-RF validity scales. There is no timeline for this project currently.
  • IRB submission is being finalized for a Veteran sample within an outpatient posttraumatic stress disorder clinic to examine the role of clinical personality measures in the classification of PTSD. This clinic utilized the PAI and MCMI. It is anticipated that analysis will be able to begin within 1-2 months.
  • Analysis of the nationally based Veteran sample is ongoing. A first manuscript from this database is being prepared. Some preliminary data for this project were presented at the 2017 MMPI-2-RF/MMPI-2/MMPI-A-RF/MMPI-A conference.
  • IRB submission is being prepared for a malingering study of Veterans as it relates to comorbid PTSD and mTBI. This project is projected to begin data collection in Spring 2018.

Treatment Seeking and Initiation/Engagement

  • IRB has approved a data collection project on mental health stigma and how it relates to health factors. Data collection is beginning now in conjunction with a second university.
  • A manuscript examining the role of masculine identity in treatment seeking is being prepared and is anticipated for submission within 1-2 months. This is based on the poster I presented with Dr. Brian Cole at 2017’s APA.

APA Reflections

This was a great conference, and a lot of ideas are still stirring around in my head from the conversations I had over the last several days. I tried to share some thoughts on twitter (@IngramPsychLab) as I went, but I think the biggest takeaway is the importance of moving stigma research beyond correlations with attitudes and intentions to seek help.

Promoting engagement in psychotherapy needs to expand to behavioral correlates for us to understand the role stigma plays. It also became really clear to me that stigma should be framed as a health disparity barrier, and that research on behaviors associated with self-stigma should expand to encompass health psychology and the preventable impacts stigma has. This is a very real, very evident, and very serious impact of stigma, and the more we do to bring this component to the forefront of stigma research, the more attention I hope we can generate on changing the culture around mental illness. There were lots of exciting conversations surrounding next steps for this at the conference – stay tuned!

APA

The 2017 APA conference is less than a month away, and I’m getting excited for the annual trip, this time to Washington, D.C. Want to talk? Come see the posters Dr. Ingram is part of at the conference center! Poster locations are in parentheses.

Thursday

11:00-11:50       Self-stigma, gender role conflict, and help seeking (F-24)

12:00-12:50       The Role of Hope and Stigma in Treatment Seeking (E-14)

12:00-12:50       Structural evaluation of RIASEC assessment methods (E-21)

Friday

9:00-9:50           Readiness to Change As a Treatment for Stigma (A-27)