Dr. Currin and I finished our analysis of publication trends among academic counseling psychology faculty today, and I’m now wrapping up the discussion section. The irony of writing a paper completely collaboratively, without taking authorship too seriously, in an academic culture of publish-or-perish really struck a chord with both of us. So… we decided our authorship order by a competitive (but 100% not serious) game of Zombie Dice.
We even had a judge (whose office we used so that we had ‘neutral territory’). A big thanks to Dr. Lindsay Greenlee for her officiating. In case you can’t tell from Joe’s deflated posture, I won (this was the best impersonation of Gaston I could muster).
A new collaborator of mine (Dr. Pat Armistead-Jehle) is presenting some work Brittney Golden and I have been doing with him on the MMPI-2-RF validity scales in the detection of failed performance validity testing at the upcoming American Academy of Clinical Neuropsychology (AACN) conference. Brittney did a fantastic job as first author on this poster.
Want to know more? Download our poster [click here]
Golden, B., Ingram, P.B., & Armistead-Jehle, P.J. (2019, June). Evaluating the effectiveness of the MMPI-2-RF Over-reporting Scales in a military neuropsychology clinic. Poster accepted for presentation at the 2019 American Academy of Clinical Neuropsychology Conference. Chicago, IL.
This is one of my favorite days of teaching Theories and Interventions. At the end of the semester, after covering sections on specific theories/mechanisms of change (e.g., dynamic, cognitive, behavioral) over the preceding months, I introduce students to the debate between common and specific factors. After a few days of lecture, I break them into debate teams to address some of the ‘big picture’ questions that come from this debate.
I ask things like: (a) do ESTs provide the BEST care for our clients, (b) if you were designing a psychology training program and selecting course focus, should specific factor EST exposure/training be emphasized more or should common factors/micro-skills be the focus to make ‘better clinicians’, and (c) how directive can (and should) manualized treatments be?
I was super proud of how well the 1st year counseling students did – they killed the debate (with teams switching positions halfway through).
In a follow-up to the pilot study on personality assessment in Training and Education in Professional Psychology, the lab has launched several related data collection projects focused on training. Our national study on assessment competency is live and recruiting (n ~ 400 as of today), and we hope to conclude data collection over the summer and start our analyses. It’s a great time to promote evidence-based assessment and evidence-based assessment training! The new project expands on many of the themes from our pilot study across all aspects of assessment (personality, cognitive, neuropsychological, and miscellaneous instruments), as well as examining supervision trends and satisfaction and instrument selection heuristics.
We (Ingram and Schmidt) are also launching an up-to-date study on APPIC and intern selection. It’s high time that we determine what “fit” is and the types of criteria used in internship sites (not just across the board, as has been done previously; see Ginkel, Davis, & Michael, 2010; Rodolfa et al., 1999) but at specific types of sites (e.g., VA, Medical School, Counseling Center, etc.). Intern applicants have specific interests so it is worth providing them specific information to enable intentional course/application planning. We anticipate data collection starting within the month.
I’m very excited that my work with Drs. Cribbet (Alabama) and Schmidt (Texas Tech) has been accepted for publication in Training and Education in Professional Psychology. In it, we sampled from 16 APA-approved doctoral programs in clinical and counseling psychology (8 each) to pilot our upcoming national study on assessment training and competency. This is the first published project examining trainee exposure and competency (both perceived and performance-based) on personality assessments, and we are excited for the next phase of this project. Pearson Clinical Assessment has partially funded the national study.
In the paper we discuss various training implications that we see as a result of patterns of instrument exposure, self-perceived competency, and result reporting information.
If you would like to see a copy of the pre-print, click HERE
I’m excited that the second paper coming out of the national database of veteran assessment outcomes has been accepted for publication in the Journal of Psychopathology and Behavioral Assessment. In a 7-year national sample, my colleagues and I examined typical score patterns observed across four common service locations within the VA (individual substance abuse treatment, PTSD clinical team, residential polytrauma, and internal medicine). This paper provides the most current comparison samples available for the MMPI, strengthening the information available to clinicians as they frame and consider typical presentations within these service clinics.
Here is the pre-print.
I’m excited about a recently accepted publication on the MMPI-2-RF validity scales. This study examined patterns of profile invalidity across a two-year national sample (n ~ 18,000) and will be published in an upcoming special issue of Psychological Services. It is based on a presentation given at the annual MMPI symposium two years ago with my colleagues Drs. Ben-Porath, Tarescavage, and Oehlert.
Here is the pre-print.
I also received notification from Pearson Clinical Assessments that they will be providing partial funding for my grant on National Assessment Training in Health Service Psychology. This research funding will expand the research presented at the 2018 MMPI symposium (currently under review for publication) and summarized in this presentation.
More exciting updates to come soon….
Today was not a writing day…
Thanks to Brittney for all the help cutting out our zombie assistant professors. If this isn’t spooky I don’t know what is! Come on psychology door decoration contest!
First off, I can’t say enough awesome things about Dr. Joe Currin. We’re lucky to have had him join us at TTU. I (along with Brittney) am doing some really cool collaborations with him: we are developing a behavioral theoretical orientation scale and exploring how orientation and considerations of ‘what supervision is’ play a role in predicting supervisee perceived outcomes. We are also working on a multi-site resiliency project that includes a cross-cultural examination of the Integrative Hope Scale (we’re planning to look at invariance across ethnicity, sexual orientation, and gender).
Otherwise, here are some quick updates
- My work with Dr. Shin Ye Kim on health behaviors and stigma (focusing on medical service utilization in a college sample) is being prepared for submission. In fact, when I finish writing this it’s back to finalizing the results so that we can send it off in the next month!
- Stigma and health research is ongoing! In a project with Patrick Heath, we anticipate submitting a fairly large multi-site proposal to APA examining the relationship of stigma with health behaviors.
- A paper examining the role of affect in the relationship between stigma and treatment intention is being finalized and should be sent out this week (Also with Patrick).
- Craig Warlick took lead on a manuscript examining the concurrent validity of a brief International Personality Item Pool instrument and, after some really supportive feedback from reviewers, I’m hoping to see it accepted soon. The IPIP offers such great AND FREE utility for incorporating personality measurement into everyday practice and research.
- Dr. Brian Cole is leading the PIWIS team (see original parenting scale here or the article validating it here) on a project examining parenting in Latino dads. Not only are we looking at cross-cultural comparisons of parenting practices, but we are also looking at stigma towards parenting services and attitudes that result in decreased training. Data are coming in now and we’re preparing the results section (Thanks go out to Alyssa Dye for helping to prep this – I think she’s even working away on it this weekend!)
- I have a paper under review validating the IHS in a large sample of Chinese students.
So many more, but that’s enough for now!
What a great conference to attend with the lab. Simply put, Strong Star puts on awesome conferences and we had a blast in San Antonio – so many awesome speakers talking about the future of PTSD treatment. As we carpooled down we talked a lot about next steps in the lab, so I wanted to highlight some products folks in the lab are part of/planning to submit over the next month that align with what we learned at the 3rd Annual Combat PTSD Conference.
Brittney (my fantastic first-year student) is going to be submitting as first author a paper to APA evaluating the behavioral correlates of the MMPI-2-Restructured Form (MMPI-2-RF) RC3 (Cynicism) and RC9 (Hypomanic Activation) scales in a college student sample (after all, shouldn’t our evidence-based assessments predict behavior?!). She is also working with me on submissions for APA and AACN, looking at MMPI-2-RF validity scale efficacy in a U.S. Army concussion clinic, which explore the role of moderators (e.g., diagnosis, sex, etc.; Ingram & Ternes, 2016; Sharf et al., 2017; Ingram et al., 2018) and the general efficacy of the over-reporting scales.
Nikki (although not my advisee I’m super lucky to have her working with me for a second year running) will be submitting a poster to APA on the results of our simulation study of PTSD/mild Traumatic Brain Injury/comorbidity on the MMPI-2-Restructured Form (MMPI-2-RF) validity scales.
Tara (an awesome undergrad RA in every way) came up with a great research question while running participants and will be submitting a poster to APA. While moderator research has generally focused on demographic characteristics, Tara was curious whether trait characteristics (especially cynicism) influence testing response bias and is taking the lead on a really cool project. I love seeing my undergrad RAs take this sort of killer initiative and generate such thought-provoking questions.
There are a few other submissions going out as well, including those looking at stigma and treatment seeking (as well as the development of a GOOD theoretical orientation scale). I’ll tell you more about those next!