Researchers from Oak Ridge National Laboratory have recently created an AI system intended to facilitate the diagnosis and treatment of individuals who have experienced significant childhood adversity. According to The Next Web, the system is designed to be “explainable”: unlike many AI models, which are black boxes, it returns snippets of the data it used to render its decisions.
Adverse Childhood Experiences (ACEs) are traumatic events that occur before the age of 18, including all forms of abuse and neglect, as well as incarceration, substance abuse, domestic violence towards a parent, and mental illness of a parent. ACEs can have lifelong effects on a person’s development and well-being, and as with many medical issues, earlier detection and treatment can improve outcomes. Effective interventions for those who have experienced ACEs are well known and well studied, but mental health treatment agencies often lack the resources to diagnose a person and see them through the full course of treatment.
The AI system was developed by two medical researchers from the University of Tennessee's Oak Ridge National Laboratory, Nariman Ammar and Arash Shaban-Nejad. In a preprint paper recently released through JMIR Medical Informatics, the research team described the development and testing of their AI model, which is designed to aid medical practitioners in diagnosing and treating those affected by ACEs.
The AI model is intended to suggest interventions to medical practitioners, making it easier for them to help people affected by ACEs. The current process for getting such an individual into treatment is long and complex. To diagnose people affected by ACEs, medical professionals must receive advanced training in the right questions to ask, then use those questions to gain insight into what events shaped a person’s childhood and how those events might have affected them. Given the many potential combinations of questions and answers, it can be quite difficult for a provider to recommend a specific intervention. Beyond this, once appointments with medical or governmental agencies have been made, a long line of healthcare and government workers will deal with the patient, and they are not guaranteed to have adequate training in or understanding of ACEs.
To tackle these issues, the research team designed an AI application that works much like a tech-support chatbot. Users feed patient information into the model, which returns a recommendation for specific interventions on a specific schedule, based on the database the model was trained on. The model takes natural-language input into account, interpreting phrases like “my house has no heating” as indicators of potential childhood adversity, and checks these contextual statements against a medical guide for the treatment of ACEs before recommending the best actions.
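The intake step described above can be sketched in a few lines. This is a hypothetical illustration, not the researchers' actual model: the indicator categories, keyword lists, and interventions below are invented for the example, and a real system would use a trained model and a clinical guideline database rather than keyword matching.

```python
# Hypothetical sketch: map free-text patient statements to ACE indicator
# categories, then look up a recommended intervention for each category.
# All categories, keywords, and interventions here are illustrative only.

ACE_INDICATORS = {
    "housing_insecurity": ["no heating", "evicted", "nowhere to live"],
    "household_substance_abuse": ["drinks a lot", "using drugs"],
    "neglect": ["home alone", "no food in the house"],
}

INTERVENTIONS = {
    "housing_insecurity": "Refer family to a housing-assistance program.",
    "household_substance_abuse": "Refer caregiver to a substance-abuse service.",
    "neglect": "Schedule follow-up with a family-support service.",
}

def detect_indicators(statement: str) -> list[str]:
    """Return the indicator categories whose keywords appear in the text."""
    text = statement.lower()
    return [cat for cat, keywords in ACE_INDICATORS.items()
            if any(kw in text for kw in keywords)]

def recommend(statements: list[str]) -> dict[str, str]:
    """Map each detected indicator to its recommended intervention."""
    found = {cat for s in statements for cat in detect_indicators(s)}
    return {cat: INTERVENTIONS[cat] for cat in sorted(found)}

recs = recommend(["My house has no heating", "I'm often home alone"])
# recs maps "housing_insecurity" and "neglect" to their interventions
```

The dictionary-lookup structure also makes the result easy to explain: each recommendation can be traced back to the statement that triggered it, which is the property the researchers emphasize.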
The responses to user entries aren’t hardcoded; instead, they are generated dynamically by a system of webhooks that invoke external service endpoints. The AI system decides which question to ask next based on the responses given to previous questions, with the goal of collecting the most useful, most relevant information in the fewest questions. As previously mentioned, the system is also explainable, exposing the data it used to reach decisions about interventions. As a result, the system is traceable, and medical professionals can follow the logic used by the system backward.
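The two ideas above, adaptive question selection and a traceable decision path, can be sketched as follows. This is an assumed illustration, not the researchers' implementation: the question names and follow-up rules are invented, and a production system would trigger webhooks to external services rather than consult an in-memory table.

```python
# Illustrative sketch (not the actual system): each answer may unlock
# follow-up questions, so irrelevant branches are skipped, and every
# answer is recorded as evidence so a clinician can trace the reasoning.

from dataclasses import dataclass, field

@dataclass
class Session:
    answers: dict[str, str] = field(default_factory=dict)
    evidence: list[str] = field(default_factory=list)  # trail behind each decision

# Hypothetical branching rules: question -> answer -> follow-up questions.
FOLLOW_UPS = {
    "home_safe": {"no": ["who_lives_at_home", "utilities_working"]},
    "utilities_working": {"no": ["heating_specifics"]},
}

def next_questions(question: str, answer: str, session: Session) -> list[str]:
    """Record the answer as evidence and return any unlocked follow-ups."""
    session.answers[question] = answer
    session.evidence.append(f"{question} = {answer!r}")
    return FOLLOW_UPS.get(question, {}).get(answer, [])

session = Session()
queue = next_questions("home_safe", "no", session)
queue += next_questions("utilities_working", "no", session)
# session.evidence now holds the answer trail that justified each follow-up
```

Keeping the evidence list alongside the answers is one simple way to get the traceability the article describes: the output is not just a recommendation but the chain of responses that led to it.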
The AI system developed by the Oak Ridge National Laboratory researchers is one of the first data-driven approaches to helping medical practitioners better diagnose people with ACEs. While this is an impressive achievement in itself, the general approach used to create the system and chatbot could potentially be extended to other domains and used to diagnose and treat other forms of mental illness. The methods used to expose the data behind its decisions could also be leveraged to increase transparency and explainability in machine-learning systems more broadly.