

Self-Initiated Humour Protocols: A pilot study in the non-clinical population

Over the past few years, the Algorithmic Human Development Group at Imperial College London has developed the Self-Initiated Humour Protocols (SIHP), an algorithmic framework for learning to laugh derived from the Self-Attachment Technique (SAT). This study aims to evaluate the efficacy of SIHP. It consists of eight one-hour weekly sessions and requires daily practice of the protocols for at least 20 minutes. In the first two weeks, you will practise the core of SAT by using a Google Cardboard headset to interact, in a virtual reality environment, with a childhood avatar created from your childhood photo. In the subsequent six weeks, you will use a chatbot that guides you in practising the SIHP.

  • You are eligible to take part if you are aged 16-70, have no psychiatric illness or substance abuse, and are not currently undergoing any other psychological intervention.

  • If you are interested, please read the attached Participant Information Sheet to find out more about the study. Then, if you would like to take part, please complete the Participant Consent Form.

Interested in learning more about this project but not quite sure whether you'd like to commit? We are holding an introductory info session with Prof. Abbas Edalat, the founder of SAT and SIHP, on Saturday, March 11th. To find out more about the introductory session and secure your spot, please see our Events & Workshops page.


A Chatbot for Guiding Users in Self-Attachment Technique

Lisa Alazraki

We present a new conversational agent for the delivery of self-attachment therapy (SAT). Our agent is augmented with deep-learning methods to ensure that its responses are empathetic, fluent, diverse and appropriate to a user's emotional state. We consider empathy to be the most important aspect of a therapeutic interaction and adopt Godfrey Barrett-Lennard's formal definition of expressive empathy as our model, in an attempt to create a chatbot able to display compassion toward the user.

The agent's dialogue is based on a novel dataset we collected — the EmpatheticPersonas dataset — containing 1,181 verbal expressions of emotion and 2,143 empathetic utterances. We use these expressions of emotion to train a deep-learning model for emotion recognition, so that the chatbot can identify a user's emotional state from their text input and respond appropriately. Moreover, we use the empathetic utterances in the dataset to craft the responses of the bot, optimizing each response for empathy, fluency and diversity.
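To make the emotion-recognition step concrete, the following is a minimal, self-contained sketch of classifying a short utterance into a coarse emotion category. It is a toy bag-of-words nearest-neighbour illustration only, not the deep-learning model described above; the emotion labels and example phrases are invented for this sketch and do not come from the EmpatheticPersonas dataset.

```python
from collections import Counter
import math

# Invented, illustrative training examples — the real system learns from
# the EmpatheticPersonas dataset with a deep-learning model instead.
TRAINING_DATA = {
    "sad":     ["i feel so down today", "everything seems hopeless"],
    "anxious": ["i am worried about tomorrow", "i cannot stop panicking"],
    "happy":   ["i had a wonderful day", "this is great news"],
}

def bag_of_words(text):
    # Represent an utterance as a multiset of lowercase tokens.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def predict_emotion(utterance):
    # Pick the label whose closest example is most similar to the input.
    vec = bag_of_words(utterance)
    scores = {
        label: max(cosine(vec, bag_of_words(ex)) for ex in examples)
        for label, examples in TRAINING_DATA.items()
    }
    return max(scores, key=scores.get)

print(predict_emotion("i feel worried about the exam tomorrow"))  # anxious
```

In the actual agent, the predicted emotion conditions which empathetic response templates are offered to the user; here it simply illustrates the input-to-label mapping.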

Lastly, we craft human-like characters for our chatbot which users can choose from and interact with, conditioning their dialogue on utterances written by individuals belonging to specific sex and age groups.

We evaluate the application through a non-clinical trial with 16 participants already familiar with self-attachment therapy, as well as two medical professionals specializing in mental health, comparing its performance against a previous version of the SAT chatbot. We show that our agent scores highly for perceived usefulness, ability to communicate empathetically and user engagement, and that it performs significantly better than the previous version in all three areas. Our agent's ability to recognize human feelings is also assessed positively, with 63% of trial participants agreeing that the bot was successful in guessing their emotions.

According to the feedback received, we identify three main areas where improvements should be made: (1) the chatbot should be able to recognize and respond to a wider and more nuanced range of emotions; (2) when asking for feedback after the completion of SAT protocols, the chatbot should accept as a valid answer the fact that a user may not have detected any change in their mood; (3) the bot should refer users to appropriate emotional support services upon detection of any input suggestive of extreme distress or self-harm.