Artificial Intelligence in Behavioral and Mental Health Care summarizes recent advances in artificial intelligence as it applies to mental health clinical practice. Each chapter provides a technical description of the advance, a review of its application in clinical practice, and empirical data on clinical efficacy.
In addition, each chapter includes a discussion of practical issues in clinical settings, ethical considerations, and limitations of use. The book encompasses AI-based advances in decision-making, in assessment and treatment, in providing education to clients, in robot-assisted task completion, and in the use of AI for research and data gathering.
This book will be of use to mental health practitioners interested in learning about or incorporating AI advances into their practice, and to researchers interested in a comprehensive review of these advances in one source.
- Summarizes AI advances for use in mental health practice
- Includes advances in AI-based decision-making and consultation
- Describes AI applications for assessment and treatment
- Details AI advances in robots for clinical settings
- Provides empirical data on clinical efficacy
- Explores practical issues of use in clinical settings
Expert Systems in Mental Health Care
AI Applications in Decision-Making and Consultation
Casey C. Bennett1,2 and Thomas W. Doub2, 1School of Informatics and Computing, Indiana University, Bloomington, IN, USA, 2Department of Informatics, Centerstone Research Institute, Nashville, TN, USA
Artificial intelligence (AI) based tools hold the potential to extend the current capabilities of clinicians, helping them deal with complex problems and ever-expanding information streams that stretch the limits of human ability. In contrast to previous generations of AI and expert systems, these approaches are increasingly dynamical and less computationalist - less about "rules" and more about leveraging the dynamic interplay of action and observation over time. The (treatment) choices we make change what we observe (clinically, or otherwise), which changes future choices, which affects future observations, and so forth. As humans (clinicians or otherwise), we leverage this fact every day to act "intelligently" in our environment. To best assist us, our clinical computing tools should approximate the same process. Such an approach ties into future developments across the broader healthcare space, e.g., cognitive computing, smart homes, and robotics.
Artificial intelligence; medical decision-making; expert systems; mental health; health care; clinical decision support systems; cognition; cognitive computing; temporal modeling; dynamical systems
Across real-world scenarios - clinical ones included - perceptions (e.g., observations) and actions (e.g., treatments) are structured in a circular fashion (Merleau-Ponty, 1945). The actions we take change what we perceive in the future, and in turn those perceptions may alter the future actions we take (isomorphic to changes in the human visual system due to movement in the world (Gibson, 1979)). There is information inherent in this dynamical process. As humans we leverage this fact every day to act "intelligently" in our environment. We think about problems in a temporally extended fashion, whether it be during treatment of a patient or making a left turn in our car. For instance, when driving a car we don't simply decide to make a left turn and then do it. Rather, there is constant perceptual feedback (e.g., if a pedestrian suddenly appears in the crosswalk, we adjust our actions). This alters further perceptions; we may alter our turning radius to avoid said pedestrian, which results in finding a fire hydrant directly in our path. Given the probability of such a sequence (e.g., how busy the pedestrian traffic is at the crosswalk), we may choose not to turn at the intersection at all, or find an alternate route. The point is that our actions lead to certain perceptions that we use to make decisions. The same is true for clinical decision-making. We are not merely passive observers of "data" - data is a process of interaction. Should not our clinical computing tools approximate the same process? If we want tools to enhance our cognition and/or improve our decision-making, those tools need to fit the way we think about the world. In other words, they should provide a sort of cognitive scaffolding that enables people to do what they do better (Clark, 2004, 2013; Sterelny, 2007).
In this chapter, we describe emerging approaches for doing exactly that, i.e., temporal modeling. Such approaches are ripe for application to health care, where treatment decisions must be made over time, and where continually reevaluating ongoing treatment is critical to optimizing clinical care for individual patients. This is especially true for chronic health conditions, such as mental illness, which account for the bulk of healthcare expenditures in the United States (Orszag & Ellis, 2007). Clinicians do not just make decisions and move forward - rather, they are constantly reevaluating those decisions, titrating medications, adjusting treatments, making new observations. It is a very dynamic process, both in terms of the treatment being delivered and the cognitive processes of the clinician and patient (e.g., how they integrate information into their decision-making over time (Patel, Kaufman, & Arocha, 2002)). This represents a major obstacle, because much of the current focus of AI and clinical decision support systems (CDSS) in healthcare is on making a single recommendation at a single timepoint. But that is not how health care really works.
This chapter is laid out as follows. "The History - Expert Systems and Clinical Artificial Intelligence in Health Care" provides a brief history of AI, expert systems, and CDSS in physical and mental healthcare. The successes and failures of such efforts lead into "The Present - Dynamical Approaches to Clinical AI and Expert Systems," where we discuss current research around artificial intelligence in healthcare, including dynamical approaches that explicitly incorporate time. In "The Future," we expand upon this current work to detail future directions around how such approaches can integrate into the broader healthcare space, for example, cognitive computing, smart homes, cyborg clinicians, and robotics. We conclude with a discussion of what this all may mean for the future of health care, mental health, and clinicians and patients alike. AI is a term often loosely applied, with "intelligence" being more of a romantic notion than a precise descriptor (Brooks, 1991). But all is not lost. The aim of this chapter is to help readers understand where we have been, where we are, and where we are going in our ongoing quest to put "intelligence" into AI, for clinical applications and beyond.
The History - Expert Systems and Clinical Artificial Intelligence in Health Care
Efforts to develop AI, both within and outside of health care, have a long history. Some of the earliest successful applications of AI in health care were expert systems (Jackson, 1998; Luxton, 2014). An expert system is a computer system designed to emulate the decision-making capabilities and performance of human experts. Traditionally, this was done by eliciting rules from experts (the knowledge base), from which inferences about the present state or future could be drawn (the inference engine) and presented to an end-user (via a user interface), as shown in Figure 2.1. The rules often took an "if-then" form, where the "then" component typically carried a probability. For instance: if the patient has symptom x, then the probability of disease y is, say, 0.6. A multitude of such rules could then be combined to calculate probabilistic recommendations.
Figure 2.1 Basic outline of an expert system.
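To make the knowledge base / inference engine distinction concrete, a minimal sketch of such a rule-based system follows. The symptoms, diseases, and probabilities are entirely hypothetical, and the noisy-OR rule for combining probabilities is just one common choice for illustration, not the method any particular historical system used.

```python
# Knowledge base: if-then rules elicited from experts, each pairing an
# observed symptom with the probability of a candidate diagnosis.
# All symptoms, diagnoses, and probabilities here are invented.
KNOWLEDGE_BASE = [
    {"if": "fever",      "then": "influenza", "p": 0.6},
    {"if": "cough",      "then": "influenza", "p": 0.4},
    {"if": "fever",      "then": "pneumonia", "p": 0.3},
    {"if": "chest_pain", "then": "pneumonia", "p": 0.7},
]

def infer(observed_symptoms):
    """Inference engine: fire every rule whose 'if' part matches an
    observed symptom, combining probabilities per diagnosis with
    noisy-OR (p = 1 - product of (1 - p_i) over fired rules)."""
    combined = {}
    for rule in KNOWLEDGE_BASE:
        if rule["if"] in observed_symptoms:
            prior = combined.get(rule["then"], 0.0)
            combined[rule["then"]] = 1 - (1 - prior) * (1 - rule["p"])
    # Rank candidate diagnoses, most probable first.
    return sorted(combined.items(), key=lambda kv: -kv[1])

print(infer({"fever", "cough"}))
```

A real expert system would, of course, sit behind a user interface that collects the observations and presents the ranked recommendations to the clinician.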
One well-known early example of an expert system in health care was MYCIN, developed in the 1970s at Stanford University. The system was designed to identify bacterial infections and recommend appropriate antibiotic treatment (Shortliffe, 1976). Similar developments were underway at the same time on the mental health side. For instance, DIAGNO was an early tool for computer-assisted psychiatric diagnosis, developed at Columbia University in the 1960s and 1970s. It took 39 clinical-observation scores as input and processed them through a decision tree to produce a differential diagnosis. The system achieved performance comparable to that of human clinicians across a variety of mental disorders (Spitzer & Endicott, 1974), though it was never put to use in real-world practice.
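The decision-tree idea behind DIAGNO can be sketched as nested threshold checks over clinical scores. The scores, thresholds, and diagnostic labels below are invented for illustration (DIAGNO's actual tree used 39 scores and far more branches) and do not reflect its real logic.

```python
# Toy decision tree in the spirit of DIAGNO: a few clinical-observation
# scores routed through threshold checks to a differential diagnosis.
# All scales, cutoffs, and labels are hypothetical.

def differential_diagnosis(scores):
    """scores: dict of clinical-observation scales, each rated 0-6."""
    if scores["depression"] >= 4:
        if scores["agitation"] >= 4:
            return ["agitated depression", "anxiety disorder"]
        return ["major depression"]
    if scores["disorientation"] >= 4:
        return ["organic brain syndrome"]
    return ["no major disorder indicated"]

print(differential_diagnosis(
    {"depression": 5, "agitation": 2, "disorientation": 1}))
# -> ['major depression']
```

Note that, unlike the probabilistic if-then rules above, a pure decision tree commits to a single path through the branches, which is part of why later systems moved toward probabilistic inference.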
Subsequent years saw the inclusion of expert systems in many CDSS. Decision support, as the name implies, refers to providing information to clinicians, typically at the point of decision-making (Osheroff et al., 2007). However, we should be careful to point out that not all CDSS tools are necessarily expert systems or AI - many are simply hard-coded rules that trigger alerts or messages, containing neither probabilistic rules nor inferential reasoning. Nonetheless, some CDSS tools do embody principles of expert systems. One recent example in mental health care is the TMAP project from UT-Southwestern Medical School for computer-assisted medication treatment of depression (Shelton & Trivedi, 2011; Trivedi et al., 2009; Trivedi, Kern, Grannemann, Altshuler, & Sunderajan, 2004). The system used algorithms to suggest appropriate changes to medications and/or dosing via electronic health record systems. It worked well in research studies, though it faced various implementation challenges in practice (see "Ethics and Challenges" section below).
CDSS tools - both those based on expert system models and otherwise - have had a mixed history of success (Garg et al., 2005; Jaspers, Smeulers, Vermeulen, & Peute, 2011; Kawamoto, Houlihan, Balas, & Lobach, 2005). Many are based on evidence-based guidelines (typically derived from expert opinion or statistical averages) that prescribe a one-size-fits-all treatment regimen for every patient, or a standardized sequence of treatment options (Bauer, 2002; Bennett, Doub, & Selove, 2012; Green, 2008). However, real-world patients display individualized characteristics and symptoms that impact treatment effectiveness. As such, clinicians quickly learn to ignore recommendations that say...