AI can tell if you are getting a good therapy session

Cognitive behavioral therapy (CBT) is one of the most widely used forms of talk therapy in the United States. Cognitive behavioral therapists in training are typically judged against 11 criteria. What if their skills could be assessed and improved with feedback from AI? That question is at the core of new research from the USC Viterbi School of Engineering, in partnership with the University of Pennsylvania and the University of Washington. It is the first such study of CBT sessions conducted with real people in real therapeutic conversations. The results were recently published in PLOS ONE.

More than 1,100 real conversations between prospective therapists and patients were analyzed by an AI developed by the Signal Analysis and Interpretation Laboratory (SAIL) at the USC Viterbi School of Engineering. The challenge for the AI, says lead author Nikolaos Flemotomos, a PhD student in electrical engineering at USC, is to understand multiple speakers and extract meaning from nothing more than the text of a conversation. Sessions run by therapists in training are usually rated by human evaluators; the AI was able to match those human ratings with 73 percent accuracy.
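To make the task concrete, here is a minimal, hypothetical Python sketch of the general idea: predicting a session-quality label purely from the text of a speaker-labelled transcript. The toy transcripts, labels and simple bag-of-words classifier below are illustrative assumptions, not the model architecture used in the study.

    # A minimal sketch (not the authors' model): predict a session-quality label
    # from the text of a speaker-labelled transcript. All data here is made up.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    def flatten(transcript):
        """Join speaker-tagged turns into one string: 'THERAPIST: ... PATIENT: ...'."""
        return " ".join(f"{speaker}: {utterance}" for speaker, utterance in transcript)

    # Toy stand-ins for real sessions (each a list of (speaker, utterance) turns).
    sessions = [
        [("THERAPIST", "Let's start by reviewing the thought record you completed."),
         ("PATIENT", "I wrote down the situations where I felt anxious.")],
        [("THERAPIST", "Anyway, that reminds me of something that happened to me..."),
         ("PATIENT", "Okay.")],
    ]
    labels = ["high_quality", "low_quality"]  # e.g. above/below a competence cutoff

    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit([flatten(s) for s in sessions], labels)

    new_session = [("THERAPIST", "What homework did we agree on last week?"),
                   ("PATIENT", "I practised the breathing exercise twice a day.")]
    print(model.predict([flatten(new_session)])[0])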

The AI was able to assess the therapist's interpersonal skills and determine whether the therapist created the right structure for the session (for example, when bringing up a patient's homework). It was also able to determine whether a therapist was focusing appropriately on the patient rather than telling too much of their own story, and whether they were able to build a working relationship with their patient. All of these aspects are taken into account to generate a single aggregated quality metric.
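That aggregation step can be pictured roughly as follows, assuming the 11 training criteria resemble the items of the widely used Cognitive Therapy Rating Scale, each scored from 0 to 6 and summed against a competence cutoff. The item names, the 0-6 scale and the cutoff of 40 in this sketch are assumptions drawn from that scale, not figures reported in the study.

    # Illustrative aggregation only: assume 11 criteria, each scored 0-6, summed
    # into a single session-level metric and compared against a competence cutoff.
    CRITERIA = [
        "agenda", "feedback", "understanding", "interpersonal_effectiveness",
        "collaboration", "pacing", "guided_discovery", "focus_on_key_cognitions",
        "strategy_for_change", "application_of_techniques", "homework",
    ]

    def aggregate(scores, cutoff=40.0):
        """Sum the per-criterion scores and flag whether the session meets the cutoff."""
        total = sum(scores[c] for c in CRITERIA)
        return total, total >= cutoff

    example = {c: 4.0 for c in CRITERIA}   # a uniform, made-up set of scores
    total, competent = aggregate(example)
    print(total, competent)                # 44.0 True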

The AI assessed only speech patterns, working from automatically generated text transcriptions rather than the sound of the speakers' voices during the sessions. According to Flemotomos, the variability of the language used in these sessions, together with the errors introduced by automatic transcription, makes understanding and evaluating these conversations, and the protocol associated with CBT, particularly challenging.
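For illustration only, automatic transcripts of the kind described could be produced with an off-the-shelf speech recognizer such as the open-source Whisper model; the choice of tool and the file name below are assumptions, and the study's own transcription and speaker-labelling pipeline is not detailed here.

    # Hypothetical example of generating an automatic transcript with Whisper
    # (pip install openai-whisper). In practice the utterances would also need
    # to be attributed to speakers (therapist vs. patient), which ASR alone
    # does not provide.
    import whisper

    model = whisper.load_model("base")
    result = model.transcribe("session_audio.wav")  # placeholder file name
    print(result["text"])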

Such assessments, usually performed by humans, are necessary for training therapists and for giving them performance-based feedback, which leads to improved clinical outcomes. The goal, according to the researchers, is to automatically generate these metrics from a recorded session to facilitate those applications.

“… our goal is not to replace human support, but to increase the efficiency of the supervisor and also to offer an instrument for self-assessment,” said the researchers.

With this tool, the process could be scaled up to meet the increasing demand for mental health services delivered by trained professionals.

As for the continued development of this work, Flemotomos says: “We want these to be adopted in real-world clinics.”

The next step is to add the tonal, or so-called prosodic, qualities of spoken interaction to this tool to enhance its capabilities.
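What such prosodic features might look like is sketched below, using the librosa audio library to extract simple pitch and energy summaries; the specific features, library and file path are illustrative assumptions, not a description of the team's planned implementation.

    # A rough sketch of prosodic feature extraction: pitch and energy contours
    # summarised into a few numbers that could sit alongside the text features.
    import librosa
    import numpy as np

    y, sr = librosa.load("session_audio.wav", sr=16000)   # placeholder file name

    # Fundamental frequency (pitch) track via probabilistic YIN.
    f0, voiced_flag, voiced_prob = librosa.pyin(
        y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
    )

    # Short-term energy (RMS) as a simple loudness proxy.
    rms = librosa.feature.rms(y=y)[0]

    prosodic_features = {
        "mean_pitch_hz": float(np.nanmean(f0)),
        "pitch_variability_hz": float(np.nanstd(f0)),
        "mean_energy": float(rms.mean()),
        "energy_variability": float(rms.std()),
    }
    print(prosodic_features)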

Flemotomos spoke of the personal appeal of such work: “Helping people directly through technology instead of just dealing with the technical aspects of an algorithm is really rewarding.”

In addition to Flemotomos, Victor Martinez and Zhuohao Chen, PhD students in the Signal Analysis and Interpretation Laboratory at the University of Southern California, contributed to the development of the AI tools and software, all under the direction of the study's senior author, Shrikanth Narayanan, USC University Professor and Nikias Chair in Engineering, in collaboration with University of Pennsylvania Assistant Professor Torrey Creed and University of Washington Research Professor David Atkins.

###



